Welcome to Rule Breaker, a newsletter that challenges conventional thinking in business and society.
If you aren’t getting new posts in your inbox, subscribe!
I’ve caught the AI fever going around the tech world
Software that mimics human intelligence is consuming a lot of attention from techies and investors. It can do a lot, and it will be the making of the next generation of unicorns.
To see what all the hype is about, I started playing around with it. My mind was blown, and I now use generative AI tools in my everyday work (including writing this newsletter).
My mind has also become full of thoughts about what it can do, how it will change our world and what harm it can cause.
Which led me to this Rule Breaker-y opinion: we need to focus just as much on responsible and ethical human decision making as we do on coding optimal decision making into AI software.
Come along with me as I explain my thoughts…
AI can do some crazy cool stuff and its possibilities feel limitless
My neighbour Bill and I have an ongoing text discussion about the neat things we’re discovering.
He’s loving it for Visual Basic programming. I’m loving it for editing my writing, drafting emails, and researching this newsletter. I use it so much that I’ve affectionately started calling ChatGPT my Junior Associate.
What’s also impressive is how it can jump from one thing to another. I can ask it to write a joke in the style of Jerry Seinfeld and then switch to an analysis of the gender pay gap in the UK.
The possibilities for how this technology can be used feel limitless; the real limit is our own imagination.
AI is going to fundamentally change how we do almost everything in our world
Generative AI is great because it allows anyone to interact with AI and see what it can do.
After playing with it for a while, I got an even better sense of what someone with an advanced computer science degree could do with this technology. I started thinking about all the ways a computer with the capacity of a human brain could be used in my day-to-day life.
Here are just some ideas of how AI could be used during my seven-minute drive to drop my girls off at daycare (let’s assume I’m already in an autonomous vehicle powered by AI…):
AI could passively listen to my conversation with my kids and assess how effectively I am resolving a fight over a toy. When I am in a calm mood (which it can tell from the smart watch I wear to track my stress levels), it could suggest ways I could have handled their argument better, given their stage of emotional and mental development and my goal of applying a nurturing, positive parenting style.
AI could automatically suggest options when my daughter asks what is for dinner. The suggestions would be based on what is in the fridge and freezer, how much time I have in the afternoon (because it knows my schedule), and what my kids have liked eating in the past, because it has monitored how much coercion is needed at each mealtime and how many leftovers there have been.
AI could also:
Suggest different times to leave in the morning to avoid traffic, and how to adjust our routine at home to fit the new commute
Automatically choose the music we listen to based on moods and what we’ve listened to in the past (please no more Cocomelon!)
Prepare a message to the daycare staff on the overall mood of my children based on the last 12 hours so that they know what to expect
You know how the car replaced the need for horseshoes? Well, AI is going to be the car for just about everything we do in life. And that is likely only a slight exaggeration.
AI will never be completely historically accurate
AI is only as good as the data and information it is trained on. In ChatGPT’s case, it only has knowledge up until September 2021.
I stumbled upon this limit in historical knowledge when doing research for my International Women’s Day post. I asked my Junior Associate (a.k.a. ChatGPT) to suggest social reforms that women should be advocating for, and I was surprised that it didn’t suggest reproductive rights. It kept telling me that Roe v. Wade was still upheld in the US, and after a bit of arguing, it fessed up to not knowing what had happened in the past two years.
This specific gap in historical information is an easy fix, but it led me to wonder… What about the cases where there are different and equally valid accounts of what happened?
There are many histories - or often, herstories - that we haven’t documented or aren’t even aware of, so an AI will never be able to be truly accurate in this domain.
AI will always perpetuate some level of systemic bias
AI is used to remove bias from common processes, such as hiring, credit scoring, and even providing judges with criminal sentencing recommendations.
However, these AI models rely on our current-day understanding of how bias shows up in our world. We use AI to review job descriptions and filter out words like “assertive” or “rockstar” so that more women will want to apply for a job. We do this because we as a society have agreed that gender-specific language is not equitable.
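To make that kind of filter concrete, here is a toy sketch in Python. The word list, function name, and simple string matching are hypothetical illustrations for this newsletter, not any particular vendor’s tool or a claim about how real screening software works:

```python
# Toy illustration: flag gender-coded words in a job description.
# The word list below is hypothetical, not an authoritative source.

GENDER_CODED_WORDS = {"assertive", "rockstar", "ninja", "dominant", "competitive"}

def flag_biased_words(job_description: str) -> list[str]:
    """Return any gender-coded words found in the text."""
    words = {w.strip(".,;:!?()").lower() for w in job_description.split()}
    return sorted(words & GENDER_CODED_WORDS)

if __name__ == "__main__":
    posting = "We want an assertive rockstar developer to join our team."
    print(flag_biased_words(posting))  # ['assertive', 'rockstar']
```

The hard part, of course, isn’t the string matching - it’s deciding what belongs on the word list in the first place, which is exactly where our shared understanding of bias comes in.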
What about when we change our understanding of what bias is? An AI model will have to wait until there is common agreement on what is and isn’t biased. Until then, it will only perpetuate the harm - as is already evident in AI applications in our criminal justice system.
Therefore, responsible AI should include educating humans on ethical decision making, not just coding ethics into the software
There is a big risk in handing the reins over to a computer brain to make the final decisions on things.
Here’s my logic…
If AI is going to be present in many elements of our lives,
and it will be impossible for it to be both historically accurate and completely devoid of bias,
then aren’t we better served by working on our human selves instead?
Much thought and work is going into the responsible and ethical programming and use of AI. Given the impending ubiquity of artificial intelligence applications, do we have an opportunity to work on the challenges humans have built into our systems since the beginning of civilization?
Some practical ideas on how this could show up:
Every AI output comes with a little warning: “This was created by a computer. As the human, it is ultimately your decision whether this is a good thing for you and society.”
Training and courses, at school and at work, on the responsible use of AI
Compulsory audits of AI decisions
AI is going to be the world’s ultimate “doer”, and we as humans need to remember that we still need to be the ultimate “thinkers”.
So there you have it. Just a little ethical thinking on AI. There is so much more to discover and debate in this area.
If you’re thinking about this or know of a good debate happening on this topic, comment and share your thoughts.
And if you want to see where my brain goes, subscribe.
Stay well you ultimate thinker,
J
P.S. I’m prioritizing shipping over perfection, so this post may not convey all my thoughts perfectly. I’d love comments or questions to keep the discovery going.
P.P.S. Like what you’re reading? Sharing is caring, so pass this on to others.