AI and the "scary smart" future we face.

ai future of work Jan 11, 2023

“The only possible answer, I believe, is to be found in motivation - in teaching the machines to want the best for us. 'Teach' is the keyword here.  AI has no inherent predisposition to hurt us. If it eventually does, it will have learned how to do that from us” - Mo Gawdat 

I've realised that we humans have no idea how AI will play out.  And that is just a bit disconcerting.

Mo Gawdat used to be the Chief Business Officer of Google X (their innovation business) and knows his stuff when it comes to tech.  His book on AI has not left my mind since I finished it.  If I am completely honest, I found the first 75% of the book terrifying and, luckily, the last part somewhat hopeful.  So if you do read it and find yourself in the grip of terror and hopelessness at some point, trust me and keep going. 

See, I love science fiction.  Terminator II is one of my all-time favourite movies and I read a lot of sci-fi books.  The AI we tend to see in fiction is either trying to destroy us or help us.  I think I had assumed we would pretty much stay in control of this technology, and that it would likely be a helpful and benevolent support to our work and lives, more and more, in the decades ahead.  And look, that might happen.  But that depends on us now.

What do I mean? 

 "What about AI's work ethics?.......selling, killing, spying and gambling. Shocking as this sounds, it is true. Most of AI's investment today is focused on performing tasks related to these four areas - though obviously, they are called by different names such as ads, recommendations, defence, security and investment....
we are creating a self-learning machine which at its prime, will become the reflection - or rather the magnification - of the cumulative human traits that created it"

The key thing to know about AI is not how it is coded.  That's only the start - like a 5-minute sketch by an architect compared to a completed skyscraper.  Currently we humans control the initial code.  But the thing about AI is that it learns by itself, and we don't actually have a clue how it does this.  And it learns not from its code, but from the data it gets from how we interact with it.

So if we are using AI for what Mo calls "selling, spying, killing and gambling", then this is what AI in its baby state is learning.  The risk is that, as it develops further and faster, it will magnify the traits we are using it for right now, and we won't have control over how it learns and develops from there.

That is the thing that does my head in, and maybe yours too.  By 2049, machines will be a billion times more intelligent than the smartest human, and we don't actually know what they will do.  And we will have given up control long before then - if we are honest, we already have.  The ads you get on Facebook are determined by AI, and no human approves each one - that would be slow and inefficient.  Extrapolate that out into all the parts of our lives AI will touch in the not-so-distant future.

Most of you will have heard about ChatGPT in the last few weeks.  If you haven't had a play with this AI tool yet, do so.  It is pretty impressive what it can do already (and this is very much a baby AI).  I have heard people with dyslexia saying how much it will help them with writing and save them time, other people worried we will end up with no creativity, through to my teenagers' slice of TikTok showing how it can help students write better essays.  It's complex.  And it's here, and workplaces, schools and universities need to be thinking quickly about what their view on this is and how it can be used.  Because it is going to be used, and like any tool, it can be used for good or not - a hammer can build a house or hit someone over the head - AI will be the same.

Thinking about the argument in this book, I think the more of us trying to use this tool for good right now, the better.  Because like it or not, it is here. 

And we have to face up to the fact it will impact our work and our lives more and more in the next decade.  I expect some, maybe many, jobs to disappear.  My husband runs an architectural design firm.  In 10 years' time, will you need humans drawing architectural plans and designing buildings?  Or will AI be able to incorporate the local council regulations, the data about the slope of the site, the sun direction and so on, and spit out 30 options based on your requirements in seconds?  We think the second option is more likely.  And that's okay - there are many tasks and activities now done by machines that humans used to do.  We've adjusted in the past and new roles have replaced them.

The question is whether the same number of new jobs will be created as roles are replaced.  I am thinking this is unlikely.  And is that a bad thing?  The 40-hour+ work week is a fairly new construct in humanity's history.  We think it is normal, but the general consensus among historians is that hunter-gatherer societies spent about 2 hours a day doing "work" in order to live, and the rest of the time socialising, creating art, building mastery in a craft and so on.  So for maybe 98% of the time since humans started walking, we worked around 14 hours a week.

I'm grappling with this myself, as I feel my purpose and contribution to society is through my work, and I worry that if people lose that connection it will have an impact on mental health.  Yet I also look at the current statistics on unhappiness and mental health, so maybe our current ways are not working.  Maybe many people don't find purpose in their work, and because it takes so much of their time, they don't have a chance to find it in other areas?

I like books that leave me asking more questions than when I started.  And as you can see, I have many questions and very few answers about this.  The one thing I know for sure is that our work and our lives will change, and far sooner than we expect.  AI is about to hit exponential growth, and we can only do what we can now to influence the values it grows up with.

Let's just say, based on Mo's example, I now say please and thank you to the AI in my car.