Frankenstein and AI

A funny thought crossed my mind as I finished reading Frankenstein (the 1818 text) by Mary Shelley:

  • Is Sam Altman Frankenstein?
  • And AI the monster?

Probably not.

What connected the dots for me were three things:

  1. The unchecked pursuit of knowledge and greatness.
  2. Parallels between the monster and Artificial General Intelligence (AGI).
  3. The responsibility of its creators.

Unchecked pursuit of knowledge

In the story, there are several passages where the author explicitly cautions against the pursuit of knowledge, as if knowledge were a bad thing you should avoid.

It is actually the unchecked pursuit that deserves the caution.

Frankenstein’s ambition led to catastrophe, mirroring tech creators diving into AI without full foresight. Are they ready for the consequences?

Parallels between monster and AGI

The monster started as a blank slate. In the book, he wanted to love and be loved. He longed for company, for knowledge, and for empathy.

Yet he was faced with the worst of humanity. People were disgusted by his appearance and treated him as a monster. He was met with violence born of the fear of those who encountered him. With all his longing for love and company rejected, he became anguished. This anguish set him on a path of destruction.

Now, let’s reflect on how things could go wrong if we end up building automated digital systems that are embedded into our current infrastructure.

A stupid example was Microsoft’s Tay chatbot, which turned Nazi within hours of being exposed to Twitter.

Now imagine plugging an AI into your entire digital life.

There’s a point to be made that AI enhances our abilities. And while that’s true, it comes with uncharted territory and plenty of ethical concerns:

  • One of them is job displacement.
  • Another one is privacy.
  • And the most concerning of them is control.

Creators’ responsibility

CEOs rush to deploy game-changing tech like AI without fully considering the implications. This mindset raises concerns about AI’s future impact.

If you know anything about tech CEOs, it’s that when they get their hands on some new tech that could change the world on a huge scale, they’re all in. It’s pretty much a “shoot first, ask questions later” vibe: they launch the tech and then scramble to handle the fallout.

That kind of approach is a bit worrying, especially when we’re talking about AI.


I don’t have a conclusion.

These are just thoughts that came to mind as I finished reading Frankenstein.