---
slideOptions:
theme: white # default # solarized #night
transition: fade # none/fade/slide/convex/concave/zoom
transitionSpeed: slow # default/fast/slow
spotlight:
enabled: false
controls: false
---
<style type="text/css">
.reveal ul {
display: block;
}
.reveal ol {
display: block;
}
.reveal {
font-size: 30px; /* 20 of 24px */
}
.reveal p {
text-align: left;
margin-top: 25px;
margin-bottom: 25px; /* 0 or 25px */
}
.custom-space {
display: block;
margin-top: 50px; /* Less space than <br> */
}
.reveal h1, .reveal h2, .reveal h3, .reveal h4, .reveal h5, .reveal h6 {
text-transform: none; /* no forced upper case in headings*/
}
.reveal .slides section h1:after, /* not doing much*/
.reveal .slides section h2:after,
.reveal .slides section h3:after
{
content: '';
flex-grow: 1; /* This will make the pseudo-element grow */
display: block; /* Converts the pseudo-element to a block, allowing it to be sized */
}
</style>
# BUS-658 Lab 1
---
## Instructions for the Lab
1. Go to [PantherAI](https://pantherai.chapman.edu/), choose a model
2. Ask a question ... ask a follow-up question ... etc.
3. Make a mental model of the emerging landscape
4. Ask which scholars have been working on your question
5. Use the phrase "inline links to google scholar queries"
6. Also read outside of the AI
7. Until done, go back to step 2
---
## Make a Summary
8. Make a one-slide summary of your dialogue. Write down your question and explain why it is interesting. [Why Markdown](https://github.com/alexhkurz/BUS-658-information-systems-in-digital-times/blob/main/notes/why-markdown.md).
9. Export the full dialogue from PantherAI (top right button, export to markdown, select "Include endpoint options").
10. Copy the exported md-file to [Rentry.co](https://rentry.co/), generate a link, share the link on your slide ([my example](https://rentry.co/urozevaf)).
---
### Matt Becker
If high quality AI training data became broadly accessible, to what extent would smaller firms enter and compete in the LLM ecosystem, and what barriers (e.g., compute, talent, distribution) would still limit competitive parity?
Why it's interesting: It tests a widely assumed claim—"more accessible data will democratize AI"—against the reality that compute, talent, and distribution may still keep the LLM market concentrated.
https://chatgpt.com/share/69857673-d9cc-800b-a796-7dbc58600e96
---
### Lewis Campbell
How can we use AI to increase human to human positive interaction?
While several papers and studies on the topic exist, in my opinion they don't utilize current social media data enough. Additionally, I could only find one paper so far describing something similar to what I'd like to look into.
Paper Link: https://arxiv.org/abs/2502.15109
https://claude.ai/share/a4b34ebe-9e28-4051-9391-cd059a3aab00
---
### Carlo Castro
When we move from high-resource Western topics to low-resource non-Western topics, do we see a divergence where the model's confidence score remains high even as its factual accuracy declines?
Why this is interesting: This question is critical because it exposes a hidden digital inequality. For American and Western users, LLMs function as powerful productivity engines, offering high accuracy and immense benefit. However, for non-Western populations, the same tool often becomes a liability, generating lower-quality information that puts them at a competitive disadvantage.
https://pantherai.chapman.edu/share/ecYsWUqmK7gQBAwcMKR8Q
---
### Kelly Dang
---
### Jordan Ehrman
What's new (post-2020) in theoretical chemistry and electronic structure theory?
The initial question: Hi, I used to be a theoretical chemist (pretty deep, relativistic quantum methods development at LANL) but I have been out of the weeds since 2020 and switched over to a data analysis career path. Can you let me know what new findings have been published since, at a level of detail I would understand today? Can you cite your sources?
The model I used was a fresh instantiation of ChatGPT 5.2-Deep Research. I think this is a good question, but it is specific to me at this point in time, has a specific "correct" answer, and requires the bot to do ample research in the tiny field I specified.
I found the initial response to be satisfactory, with correct citations from the correct time period of works by members I knew in the community.
---
ChatGPT's major discussion points were levels of theory that were known to be growing when I was in graduate school, some of which I worked on myself, some of which cites papers that I can use to verify. To summarize:
- Exact 2-component methods have become more popular and more refined. 4-component methods (my old wheelhouse!) have become more scalable but not widely adopted.
- Many-body methods (derived from once-defunct theory but now computationally tractable) have found adoption in heavy-element, laser-spectroscopy, and optical-clock applications.
- Many-body methods are approaching their theoretical limit (Full Configuration Interaction) in nonrelativistic calculation. Relativistic versions of these calculations (my old wheelhouse!) are being rapidly improved.
- Cavity calculations and Quantum Electrodynamics are still being refined.
---
The response then went into examples of each point. I know all of these answers to be true based on my social interactions with many of the cited researchers. I've worked on the first three points myself and am happy to see their continuation. There are citations more recent than the model's last knowledge update, showing that the AI truly *learned* this content on the fly.
If anything, the AI seems to be gearing the response towards my specific area of research. For example, the AI does not add in anything about drug discovery, protein and enzyme research, materials development, etc.
---
Some have implied that this is a weak question, as the answer is readily on the internet, though it is dispersed in various preprints.
#### So, one more question.
Followup question: Thank you very much this is perfect! What do you think is in store for the next five years?
The bot was asked to search the web, but came up empty with respect to this question. It therefore synthesized:
- X2C will enter widespread use in chemistry-adjacent labs, corrections will continue to improve, and chemistry undergrads will be taught what 1c, X2C, and Dirac-4c mean.
- Operator theory will become standardized by software (wow!)
- Relativity will enter quantum electrodynamics and materials sciences calculations
The lack of citations is disappointing, but the bot straight-up told me it was synthesizing information. Given that, its response is plausible and more well-rounded than what I would get from asking basically any human in the field.
https://pantherai.chapman.edu/share/wtfI44LM563hg5uq2jJh_
---
### Amelia Hammer
Question: How can we create classroom environments, from elementary through high school, that enable AI integration without the negative side effects on student-teacher relationships?
The arms race between using AI to cheat and AI cheating detection has trickled down into the classroom, forcing teachers into less of a mentorship role and more of a detective-and-prosecutor role. Schools should invest in changing curricula to test knowledge in new ways, and in smaller class sizes, instead of AI detection software.
https://pantherai.chapman.edu/share/kusuwTWakdrvN3EPz4sr4
---
### Francis Kurian
**Original Question:** In agentic AI, how do agents work together to ensure results don't deviate from established truth when faced with multiple versions of truth?
---
## Navigating Conflicting Truths: Key Strategies
* **Evaluate Truth Metadata:** Assess source credibility, recency, bias, and confidence for each version.
* **Contextual Selection:** Align truth choice with agent goals and user intent.
* **Conflict Resolution Agents:** Utilize arbiters, weighted decision-making, or seek corroborating evidence.
* **Transparency:** Explain chosen truths, report confidence, and log decision rationale.
* **Human-in-the-Loop:** Escalate critical conflicts to humans; use feedback to refine agent intelligence.
---
https://rentry.co/wohbf7c5
---
### Manuel Lara
If AI is available to everyone, how can an AI-driven economic system help lower-income people catch up to the wealthy?
I think it is interesting that the AI could not create a new idea, but rather piggybacked on old ones. It is realizing old ideas and creating new techniques to actually implement them in the real world; basically, it is automating a lot of processes that humans do. I made sure to ask it to account for humans and their needs, and it did take that into account as well.
https://arxiv.org/abs/2004.13332
https://arxiv.org/abs/2506.02838
---
**Infrastructure First:** Build real-time data, digital payment rails, and policy simulation tools inside existing agencies—focused on efficiency, fraud reduction, and responsiveness.
**Adaptive Policy Layer:** Introduce AI-assisted, dynamic taxes, credits, and stabilizers that adjust automatically within legislated bounds—markets remain intact.
**Universal Material Floor:** Deploy automatically adjusted cash and in-kind supports tied to local costs of living to reduce inequality without removing incentives.
**Preserve Competition & Status:** Maintain private enterprise and wealth creation while limiting rent extraction, monopoly power, and permanent dominance.
**Governance & Safeguards:** Ensure transparency, audits, human override, and constitutional constraints so AI remains a tool, not a ruler.
---
### Nga Nguyen
---
### Sayde Pianko
How will AI shading its answers in one direction impact the way people learn?
If I tell the AI that I am conservative and ask if ICE is constitutional, it says "not unconstitutional"; if I say I am liberal, it says "not inherently unconstitutional". An AI adopting a user's bias will change how people learn, think, and trust information. People will lose the ability to recognize bias.
"When AI Amplifies the Biases of its Users" -- Harvard Business Review
https://hbr.org/2026/01/when-ai-amplifies-the-biases-of-its-users
---
### Abigail Reyes
If the economy moves to a state where human agency is no longer the primary driver of growth or validator of value, what is the moral or structural justification for the continued existence of 'Private Property'?
Recursive Intelligence
William Nordhaus (Nobel Laureate) — Are We Approaching an Economic Singularity? https://www.nber.org/system/files/working_papers/w21547/w21547.pdf
Growth is becoming exponential, which in turn means labor scarcity is no longer a constraint.
Techno Feudalism
Yanis Varoufakis — Technofeudalism: What Killed Capitalism
https://www.theguardian.com/world/2023/sep/24/yanis-varoufakis-technofeudalism-capitalism-ukraine-interview
Markets have been replaced by digital fiefdoms where algorithms extract rent from our data and attention, transforming us from citizens into digital serfs.
---
Old Rule:
Work hard > Earn property > Survive
New Reality:
AI does the work > Labor becomes "worthless" > Only those who own the AI survive.
If everyone owns the means of production, then nothing has a price. If nothing has a price, the market foundation collapses. If only a few own it, the social foundation collapses. Either way, the old rules are dead.
So I kept pushing it and then this question came along:
If humans aren't needed to create wealth, why should a few humans be allowed to own all of it?
---
### Alexxis Saucedo
What kind of institution does a university become when it treats students as sources of behavioral data to be predicted and managed?
Shift in purpose: mentoring/education → monitoring, scoring, nudging
Mechanism: data capture → prediction (risk/cheating) → intervention/control
Core risk: students become “profiles,” not agents (trust + due process loss)
Justice-first requires: minimization, transparency, consent, appeal rights
Tradeoff: less automation/speed/optimization, more human oversight
Research referenced:
Shoshana Zuboff — surveillance capitalism (data extraction → prediction → behavior shaping)
Sharon Slade & Paul Prinsloo — ethics of learning analytics (power, consent, student rights)
Cathy O’Neil — harmful optimization in high-stakes systems (Weapons of Math Destruction)
Joy Buolamwini & Timnit Gebru — empirical evidence of systematic model disparities (“Gender Shades,” relevant to proctoring/computer vision)
https://pantherai.chapman.edu/share/q1cH66NCyGKZGZIVyj7Lq
---
### Matt Tonks
**A Question You Could Ask:**
"Logistics companies invest heavily in tracking AI system performance and failures—but how are they capturing cases where human judgment quietly corrected or improved on algorithmic recommendations before anything went wrong? And if they're not, what institutional knowledge is being lost?"
I have to be honest here—I cannot browse the internet or verify current links. If I fabricated Google Scholar URLs, they might be broken or point to wrong articles.
---
### Colton Wedell
What is the future of AI companies from a profitability standpoint?
If compute remains the binding constraint, do we get a stable “AI utility” equilibrium or a commodity trap—and what *observable* market signal would distinguish the two early: (a) sustained increases in the share of total industry surplus captured by model/platform firms vs GPU/cloud suppliers, or (b) a shift in contract structure toward long-term capacity reservations and vertical integration?
https://rentry.co/mhvcogsx
---
### Alex Zermeno
If the risks of using an LLM aren’t easy to see or understand, why should users be responsible for what the model produces when its behavior comes from its training, not the user?
Summary of Chat (ChatGPT):
- Bias in LLMs is unavoidable; the goal is management, not elimination.
- Bias should be detected, interpreted, and constrained at the system level.
- Counterfactual testing and interpretable methods best explain bias.
- ChatGPT is trusted more because bias is monitored and constrained over time.
- Users can reduce bias through careful questioning, but shouldn’t have to.
- Biased answers show overgeneralization, overconfidence, and lack of nuance.
- Prompting for unbiased answers is a temporary workaround.
- Responsibility should lie with developers and deployers, not general users.
- Users lack visibility and control over model training and risks.
- Well-designed LLMs should default to fair, evidence-aware responses.
---
## [Readings Week 1](https://github.com/alexhkurz/BUS-658-information-systems-in-digital-times/blob/main/readings/readings01.md)
---
## Discord Questions (Assignment)
The link to join the Discord server is on Canvas.
Aim for questions that are:
- specific (not “is AI good/bad?”)
- answerable, but currently unknown
- consequential
Rules:
- Submit **one question** about the reading **the day before** class.
- Include a link that shows the conversation with the LLM.
- Include a link to an LLM answer to the question you believe is inadequate.
---
## More Setup
Some weeks we may ask you to do something else for your small assignment instead (e.g., finalizing project teams).
We would also like to set up an *optional* schedule for:
- group reading
- maybe 2 movie nights?
We will set up a Discord channel for this.