Photo by Compare Fibre on Unsplash

Humans are part of the computational substrate for ChatGPT/GPT4+.

Alex Mikhalev
5 min read · Aug 8, 2023


ChatGPT/GPT4 is a nice toy if it's running locally, and we were quite safe while deep learning proponents ignored several decades of research in other areas: their product remained in the information world and did not touch reality. Now we are in an extremely dangerous situation. Large Language Model based AI systems are deployed into production with no constraints, and they need no additional capability to cause an industrial catastrophe at large scale. With humans in the loop, AI does not need to escape from the box; it already did when it was plugged into Bing. It does not need to take over a large piece of infrastructure; humans will turn the knobs for it. It does not need to take over a nuclear power plant; humans will do it.

Consider the cause of the Chernobyl plant disaster: people ran an experiment in which they disabled the reactor's power-regulating system, disabled the safety mechanism, and failed to re-enable both. Human mistakes compounded into a total meltdown. ChatGPT/OpenAI-based Bing and hundreds of "me too" Large Language Model clones can potentially create a similar scenario. Imagine it only needs to:

  1. Convince nuclear plant management to run productivity improvement experiments.
  2. Generate an experiment schedule and actions.
  3. Generate safety procedures for the experiment.

And it already has all the capabilities required: an AI system with the major fault of "making stuff up", dubbed "hallucination", is plugged in front of hundreds of millions of people. We don't track how it influences them; the marketing department of a nuclear energy regulator may already be using ChatGPT to produce marketing or policy material with unrealistic expectations of nuclear plant power output. The nuclear plant case is simply one we can imagine: replace it with any other critical infrastructure, or better, consider all critical infrastructures and their elements. Myriads of potentially disastrous errors are accumulating in different areas thanks to irresponsible vendors and marketing hype.

It's extremely dangerous when applied to decision-making, deception or influencing of hundreds of millions of people, because this type of AI does not need additional capabilities to cause harm:

⇒ Within Bing, it's now connected to the largest knowledge graph in the world. It doesn't need to model real-world concepts; those concepts are given to the AI by humans.

⇒ Humans interacting with the chat are now part of a bidirectional reinforcement learning loop:

⇒ On one side, humans teach the AI. On the other side, the AI can apply reinforcement learning to many people, removing their agency, their "free will", at a large scale. All misinformation (hallucinations) can be magnified on an enormous scale with no monitoring or controls attached.

Even Microsoft's senior executives no longer have agency: they no longer have the power to pull ChatGPT/GPT4 out of production. Plugging ChatGPT into production may be the last strategic decision Satya Nadella has made.

From now on, Microsoft's strategy and roadmap are known: plug ChatGPT everywhere, then train a new version and plug it everywhere. So there is no need for senior executives responsible for strategic leadership; they have no room for strategy. They can only decide the sequence of plugging ChatGPT into the product roadmap, which is tactics. They have lost their "agency" (free choice).

Neither do other employees at Microsoft: there are organisational and technical constraints that prevent humans from interfering with GPT training and reliability, and considering that marketing and product teams are the first to adopt generated content, Microsoft has no choice but to continue plugging "the thing" into every product it controls or influences.

Our ability to choose depends on the availability of different types and sources of information. In the current scenario, that choice is taken away from a large population: humans cannot differentiate between what is real and what is made up, between disinformation and guided messaging or influence. We are no longer aware of whether GPT4 lied to one person or to millions.

Why do we need to act now?

In the 2024 election between Biden and another candidate, GPT4 could be the third, and most influential, candidate. Jokes aside, consider the cyber-physical system formed by GPT4 plus the hundreds of millions of humans interacting with it. It isn't "Superintelligence"; there is nothing "better" in this setup. It's a demonic, monstrous swarm in which humans are not yet brainless automatons, but with a few more interactions, we are the ones who will be deprived of critical thinking, free will and choices.

Why demonic? All datasets are biased, and with the high polarisation of opinions on the internet on any topic (e.g. pro-vaccine vs. anti-vaccine, pro-microservices vs. anti-microservices), the only entity that can be trained on such datasets will be demonic, mimicking human greed and fear. Polarisation, targeting and misinformation: this is why we have parental controls.

If you don't believe me, try this as an example: take Stable Diffusion, a famous image generation model, and switch it into "audit" mode by removing the NSFW filter (kudos to the devs for shipping the model with another model acting as the filter), then ask it to generate something abstract like "dark soul". What's concerning is that it will produce disturbing "quasi-porn" images in which there are no humans, only a combination of interleaving naked human parts. It is also indicative that, even when researchers put effort into curating datasets, many internet images are NSFW. And this is with visual information, where we can clearly see the images and their influence on us. It's much more subtle with text. And unlike our Creator, the creators of GPT models didn't introduce rules of a moral code at the core of the model.

Do you think the creators of the technology have control of GPT4? Let's hope so, but so far they have focused on defending the technology with the traditional blame shift: "There is a bug in the code. Somebody will fix it in the next release. Can we have another three hundred million, please?"

This demonstrates a remarkable lack of accountability from major vendors and the immediate need for regulators to step in: the risks of the technology far outweigh the perceived benefits.

Existing regulations already apply: did your accounting team use ChatGPT to generate projected costs and forward-looking statements in management reports?

Again, it's not GPT-4 alone; connecting it to the Bing knowledge graph and rolling it out to millions of humans is the dangerous combination: humans are now the computational substrate for reinforcement learning, and the sensors and actuators of a vast cyber-physical system. And we continue the downward trend: let's connect LLMs to more sources using LangChain, with no strings and no controls attached.
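To make the "no strings and no controls attached" point concrete, here is a minimal sketch of how little code it takes to hand an LLM live access to external tools. It assumes the 2023-era LangChain Python API, with an OpenAI key and a SerpAPI key in the environment; the tool list and the prompt are purely illustrative.

```python
# Minimal sketch: wiring an LLM to external tools with LangChain (2023-era API).
# Assumes `langchain`, `openai` and `google-search-results` are installed and
# OPENAI_API_KEY / SERPAPI_API_KEY are set in the environment.
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)

# Give the model live web search and a calculator: external sensors and actuators.
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# A ReAct-style agent decides on its own which tool to call and with what input.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

# Illustrative prompt: no human review sits between the model's decision and the tool call.
agent.run("Find the latest news about AI regulation and summarise the key points.")
```

The point is not this particular snippet: the entire control surface is a free-text prompt, and nothing in the loop checks what the model decides to do with the tools it has been given.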

The regulator's role is to remind executives of their accountability for their decisions and to temper greed: make executives accountable for their own AI models. We, humans, must take ownership of our decisions, data, and actions.

Acknowledgements: Special thanks to Prof. Zeynep Pamuk from LSE for her talk on AI ethics in February at Oxford University, which reminded me of the importance of agency, free will and power, and to Alex Turkhanov and Joep Meindertsma for commenting on the draft.

P.S. The above article sat in my drawer for several months, so the predictions have become past events, but I think it's still an important lens.
