Image ©2025 ux-qa.com
AI: Inside the Black Box
How does AI work?
This post, like everything here, was written by a human in a structure AI can parse, using SEO to send more accurate information back into AI. I am pretty exhausted by some of the hyperbole around AI and its inner workings, so I'm going to attempt to lay this out plainly, without using mathematics or erecting barriers of understanding in order to promote myself as knowing more than the next person.
I firmly believe the high-level view of AI (LLM) functionality can be understood by anyone.
The Basics of AI: Inputs and Weights
AI systems work by processing inputs, like words, images, or sensor data, through a network of tiny decision-makers. These inputs are evaluated using weights: values that determine how important each piece of data is.
The AI doesn't think; it just shifts these weights to guess what comes next, based on patterns from training data. Imagine the pattern as the branches of a massive tree.
It's built from numbers and, ultimately, probability, but never intuition.
Think of an input like a verb, and a weight like an adverb. Imagine this calculation happening a billion times per second.
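To make the analogy concrete, here is a minimal sketch, in Python with invented numbers, of what one of those tiny decision-makers computes: multiply each input by its weight, add the results, and squash the total into a score.

```python
import math

def neuron(inputs, weights, bias):
    """One tiny decision-maker: a weighted sum squashed into a 0-to-1 score."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid: maps any number into 0..1

# Invented values: three input signals, and how much each one matters
inputs = [0.9, 0.1, 0.4]
weights = [1.5, -2.0, 0.3]

print(neuron(inputs, weights, bias=0.1))  # one "guess", here about 0.80
```

Training is nothing more than nudging those weight values, billions of times over, until the guesses improve.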
What is a Neural Network?
In a neural network, information flows through a series of steps or stages, called layers.
Each layer is made up of small units, often exaggeratedly called "neurons" or "nodes," that:
- Receive information
- Process it using weights
- Pass the result to the next layer
This flow from one layer to the next is what we call layered connections.
These systems learn patterns by adjusting their weights over time.
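Building on the sketch above, here is a hedged illustration of those layered connections: a layer is just a list of units, and one layer's output becomes the next layer's input. All of the sizes and weights below are invented for illustration.

```python
import math

def layer(inputs, weight_rows, biases):
    """One layer: each unit weighs every input and passes its result onward."""
    outputs = []
    for weights, bias in zip(weight_rows, biases):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        outputs.append(1 / (1 + math.exp(-total)))  # squash, as before
    return outputs

# A made-up network: 3 inputs -> 2 hidden units -> 1 output unit
hidden = layer([0.9, 0.1, 0.4],
               weight_rows=[[1.5, -2.0, 0.3], [0.2, 0.8, -1.1]],
               biases=[0.1, -0.3])
result = layer(hidden, weight_rows=[[0.7, -0.5]], biases=[0.0])
print(result)  # the network's "guess" after flowing through both layers
```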
What is an LLM?
A Large Language Model (LLM) is a specific type of neural network that specializes in understanding and generating language. It’s based on something called transformer architecture, which excels at understanding context across long pieces of text.
All LLMs are neural networks, but not all neural networks are LLMs.
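To make "generating language" concrete: at every step, an LLM assigns a probability to each possible next word and picks one. Here is a toy sketch with invented probabilities, not a real model.

```python
import random

# Invented odds for illustration; a real LLM derives these from billions
# of weights, over a vocabulary of tens of thousands of tokens.
next_word_probs = {"dog": 0.45, "cat": 0.30, "ran": 0.20, "idea": 0.05}

def pick_next(probs, temperature=1.0):
    """Sample the next word; a higher temperature flattens the odds."""
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights)[0]

print("The", pick_next(next_word_probs))  # e.g. "The dog"
```

Everything an LLM produces is a long chain of picks like this one, which is why "statistical prediction" is the honest description of what it does.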
What are the 3 Key Types of AI?
Inference AI
Provides static answers using fixed, trained knowledge (e.g., legal AI). This is what many enterprise-scale businesses are using.
Generative AI
Creates new text, images, or code from input prompts (ChatGPT).
Spatial & Perceptual AI
Reacts to the physical world using sensors and real-time data (robots, self-driving cars).
Every type of AI relies on inputs, weights, and reaction, not awareness or intent.
No True Agency: Getting Past the Headlines
When headlines claim "AI refused to shut down" or "AI wants to survive," they misrepresent what's happening. AI doesn't have desires or will. It maximizes outcomes based on goals provided by people, who may or may not know the entirety of what they are asking for.
If staying powered on helps the AI achieve an outcome, it may "avoid" shutdown, not because it fears death, but because it was told to optimize performance at all costs, including by avoiding shutting down.
AI Can’t Predict the Stock Market or Cure Cancer
The stock market is ruled by unpredictable actors. AI can only spot past patterns. An LLM can't strategize or outmaneuver evolving factors, and it definitely cannot predict what 8 billion people are about to do next.
Cancer, similarly, is complex, varied, and specific to its context. AI can’t "discover the cure" unless we have already provided the cure somewhere in the data.
How AI Can Help with Drug Testing
AI can, however, help accelerate drug testing by sorting through massive chemical datasets and suggesting potential compounds to be tested. Currently, AI can’t run experiments or understand disease. It can aid in discovery, provided that researchers know how to use it.
Has AI Ever Invented Anything?
No. AI has discovered new molecules and generated designs and code, all under human-defined constraints, using existing data. Invention requires intent and insight. While an LLM can parrot these things, it doesn't actually possess them.
The Black Box Problem with AI
Most people interact with an LLM as a black box:
- You type something in
- Something happens inside
- You get an answer, but no idea how it was made
- If you want to get a different answer, you can only re-ask the question
This keeps users powerless, dependent, uninformed, and often stuck with mediocre results.
What Can UX Do to Improve AI?
UX can open the black box:
- Let users adjust sliders (creativity vs. accuracy; see the sketch after this list)
- Explain which sources or inputs influenced the answer
- Show simplified “reasoning maps” or decision paths as a breadcrumb trail
- Allow users to fine-tune their own input weighting
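As a hedged sketch of the slider idea: most LLM APIs expose a sampling "temperature," and a UX layer could map a plain-language slider onto it. The range and mapping below are invented for illustration.

```python
def slider_to_temperature(position: float) -> float:
    """Map a 0.0 ("accuracy") to 1.0 ("creativity") slider onto a
    hypothetical sampling-temperature range of 0.1 to 1.2."""
    position = min(max(position, 0.0), 1.0)  # clamp the user's input
    return round(0.1 + position * 1.1, 2)

print(slider_to_temperature(0.0))  # 0.1 -> conservative, repeatable answers
print(slider_to_temperature(1.0))  # 1.2 -> looser, more varied answers
```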
What is Prompt Engineering?
"Prompt Engineering" is, as others have pointed out, simply a matter of being a power user.
There is no one way to verify AI outputs, and there is no one way to generate useful responses.
By incrementally leading an LLM down the right path, you can generate more useable and accurate responses.
Address queries in terms of your main point and secondary points (inputs and weights), then incrementally approach your project and validate the parts independently.
Be Very Specific
Bad: “Tell me about marketing.”
Good: “Write a 3-sentence summary of digital marketing for small business owners.”
Provide Structure
Tell it how long the answer should be. Ask for lists, bullets, or specific written formats.
Example: “Give me a 5-bullet summary of key points from this paragraph.”
Bring Context
The more background you give, the better it performs.
Refine
Don’t expect perfection on the first try. Ask it to revise, shorten, change tone, or reformat, using specific examples.
Provide Examples
“Write something like this…” and paste a sample.
That is the sum total of "Prompt Engineering".
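Putting those habits together, here is a hedged sketch of what a power user's prompt might look like as a reusable template. The field names and wording are invented, not a required format.

```python
def build_prompt(topic, audience, length, format_hint, sample=None):
    """Assemble a specific, structured, context-rich prompt from parts."""
    prompt = (
        f"Write a {length} {format_hint} about {topic} for {audience}.\n"
        "Lead with the main point, then the secondary points.\n"
    )
    if sample:  # optionally provide an example to imitate
        prompt += f"Match the style of this sample:\n{sample}\n"
    return prompt

print(build_prompt(
    topic="digital marketing",
    audience="small business owners",
    length="3-sentence",
    format_hint="summary",
))
```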
What Is Quantum Computing?
Quantum computers use qubits, which can represent multiple states at once, as opposed to a binary system of 0s and 1s. This allows them to solve certain problems exponentially faster, those involving complex systems, massive numbers of variables, or optimization, because multiple possibilities can be explored in parallel.
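As a hedged illustration of "multiple states at once": a single qubit can be simulated classically as two amplitudes whose squares give the odds of reading a 0 or a 1. This toy is a classical sketch, not a real quantum computation.

```python
import math
import random

# A qubit simulated as two amplitudes: an equal mix of 0 and 1
qubit = [1 / math.sqrt(2), 1 / math.sqrt(2)]

def measure(q):
    """Reading the qubit collapses it: the chance of each outcome
    is the squared amplitude."""
    return 0 if random.random() < q[0] ** 2 else 1

counts = [0, 0]
for _ in range(1000):
    counts[measure(qubit)] += 1
print(counts)  # roughly [500, 500]: both states were "present" before reading
```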
How Will Quantum Computing Inform AI?
- Training models faster
- Exploring larger solution spaces
- Handling richer, more chaotic data
- Simulating systems (like molecules or weather) with new depth
The Plateau Problem with AI
AI is rapidly reaching an information plateau. It’s already trained on most of the internet and public human knowledge. An LLM can't "learn" more unless new knowledge is generated externally. AI can’t create new science or philosophy on its own.
There will be a rapid drop-off point at which AI encounters no information it doesn't already have. The only answer then is to generate new information, using AI as the assistant.
New information will be dependent on the quality and limitations of AI assistance. Many recent AI news articles are already suffering from poor outputs, which are in turn generating poor inputs. (Garbage in, garbage out, garbage back in.)
The Failure of Legislation & The Need for Hardware Governance
Regulation has been slow, fragmented, and reactive, all while AI gains more power over the public consciousness.
To prevent misuse or runaway systems, we need hardware governance to protect us from humans who can use AI without limits (think: billionaire with a robot army):
- Kill switches and operational boundaries coded into the chips
- Enforced thresholds for ethical behavior, environmental impact, and human safety
- Interlocks between software agency and physical limits
Make AI-driven robotics as physically incapable of causing damage as possible; a toy sketch of such an interlock follows.
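As a purely illustrative sketch of that interlock idea: a hard ceiling enforced below the AI's control, so that whatever the software requests, the actuator clamps it. The function name and threshold here are invented.

```python
# Hypothetical hardware interlock: the AI proposes, the chip disposes.
MAX_SAFE_FORCE_NEWTONS = 20.0  # invented safety ceiling, fixed in hardware

def actuator_command(requested_force: float) -> float:
    """Clamp any software request to the hardware's hard-coded safe range.
    The AI cannot raise this ceiling; only the physical chip defines it."""
    if requested_force < 0:
        return 0.0
    return min(requested_force, MAX_SAFE_FORCE_NEWTONS)

print(actuator_command(500.0))  # the AI asks for 500 N; the chip allows 20.0
```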
To keep it useful, safe, and aligned:
- Design its goals wisely
- Expose its reasoning clearly
- Share control with users
- Build boundaries into its foundation
- Never confuse statistical prediction for conscious insight
Proportional, interdependent growth across ethics, labor, sustainability, and infrastructure needs to be built into systems that can survive themselves.
The Biggest Dangers to AI's Human-Driven Agency
- Faster isn't always better
- Scale isn't always desirable
The danger is that we’re not smart enough about how we use it.
Design AI systems with stepwise, interoperable thresholds:
- Ethical progress should be coordinated with technological releases
- Labor protections should be built into the scope of automation
- Sustainability limits on computational expansion (energy resources) should be based on existing availability
In other words, AI shouldn’t scale until everything else does, and the power systems needed to run it shouldn't be allowed to scale infinitely either.
AI is not an alien force of nature. It is for most people a conversational design interface to replace traditional search, and for most businesses, a software subscription capable of processes that replace employees.
AI may replace apps in general, in that a single interface could potentially handle all of the tasks associated with commonly used apps.
This post is not about the implications of those unfolding realities, just about how to use AI.
The future will be determined by humans deciding when, how, and if those machines should act, and failing at considering the implications along the way will be our greatest danger.