
AI and Market Research

05 Apr 2023 | Research & Business Knowledge

Article by ICG member Arthur Fletcher, DataPogo.com

Part 1.  I, ChatGPT


After 40+ years I have started to re-read Asimov’s robot novels.  I remember them as captivating, illuminating and challenging: they describe the development and application of robots and their relationship with the human race.  Whilst the robots were meant to serve and assist humans, they often developed their own sense of morality and consciousness, leading to conflicts with human society. In some cases humans feared and mistrusted robots, while in others they relied heavily on them for their everyday needs. Asimov’s robots raise questions about the ethics and implications of advanced artificial intelligence and its impact on human society.

Asimov’s robots had highly developed ‘positronic’ brains, so I wondered how similar or different their operation – and the way they ‘think’ – might be to AI and ChatGPT. I asked the ChatGPT bot for clarification (or is it an opinion, given that it used the personal pronoun ‘I’ in its response?).

The ChatGPT bot describes Asimov’s fictional positronic brain as having ‘advanced intelligence and decision-making capabilities’, and describes ChatGPT as ‘technology based on machine learning and natural language processing algorithms, designed to simulate human-like conversation.’  It draws the distinction between the two by noting that ChatGPT is based on real-world scientific principles and technologies, while Asimov’s robots rest on fictional technology.  Well, that’s alright then…
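For anyone curious, the same question can be put to ChatGPT programmatically. Below is a minimal sketch using the OpenAI Python library as it stood when this article was written (pre-1.0 versions of the openai package); the API key placeholder and the prompt wording are my own illustrative assumptions.

```python
# Minimal sketch: asking ChatGPT the article's question via the API.
# Uses the pre-1.0 openai package (current at the time of writing).
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: substitute your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # GPT-4 API access was waitlisted at the time
    messages=[
        {"role": "user",
         "content": "How does ChatGPT differ from the positronic brains "
                    "of Asimov's robots?"},
    ],
)

# The reply is the first choice's message content.
print(response["choices"][0]["message"]["content"])
```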

Asimov started writing his robot stories in the 1940s; they were collected in ‘I, Robot’, published in 1950.  The first robot stories are set in 1998, and later ones move from 2021 through to 2052 (stories concerning a politician who may or may not be a robot, and whether the machines that order the world’s economic systems are planning a war against humanity).  Suffice it to say that these robot stories are set in our time.

Asimov’s robots needed some sort of ChatGPT equivalent in order to communicate with humans, so at what point do ChatGPT-enabled devices have ‘decision-making’ capabilities?  The answer is: all of them… and that’s the point.  AI supports decision-making in all aspects of society – business, government, healthcare, climate and so on – but where are the decisions made, and who decides at which point AI makes decisions that affect people’s lives?

Elon Musk and the Future of Life Institute have recently warned that AI could ‘pose profound risks to society and humanity’ and have called for a six-month pause in the development of advanced AI, specifically citing GPT-4 (the latest OpenAI model, which powers the newest version of ChatGPT).

In their open letter, they state that ‘AI systems with human-competitive intelligence can pose profound risks to society and humanity’ and they go on to say ‘AI labs (are) locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.’

I don’t know about you, but AI being ‘out-of-control’ sounds a bit scary.  Indeed, Italy’s data-protection authority has now banned ChatGPT, citing privacy concerns, and is investigating OpenAI (ChatGPT’s developer).

The Future of Life Institute is not against AI – far from it – but it opposes the uncontrolled development of AI, calling for ‘shared safety protocols’, ‘robust AI governance systems’ and ‘a robust auditing and certification ecosystem’.

Asimov’s solution was his Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In later fiction, where robots had taken responsibility for the government of whole planets and human civilizations, Asimov added a fourth – or rather ‘zeroth’ – law to precede the others:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

All we need now is the means of building laws like these into the development of AI, but defining ‘harm to humanity’ is challenging to say the least.  Imagine if the solution to the planet’s climate crisis was a significant reduction in the human population – who (or what) decides what (or who) to do?
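Purely as a toy illustration – not a real safety system – the laws can be read as a strict priority ordering: check each proposed action against the laws in sequence and report the highest-priority law it breaks. Every name in the sketch below is hypothetical, and the hand-set booleans are exactly the cheat: the unsolved problem is computing judgements like ‘harms humanity’ in the first place.

```python
# Toy sketch: Asimov's laws as a strict priority ordering.
# All names are hypothetical; the hard part (deciding what counts as
# harm) is hidden inside hand-labelled boolean flags.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_humanity: bool = False   # would violate the Zeroth Law
    harms_a_human: bool = False    # would violate the First Law
    disobeys_order: bool = False   # would violate the Second Law
    endangers_robot: bool = False  # would violate the Third Law

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action violates, or None."""
    if action.harms_humanity:
        return "Zeroth Law"
    if action.harms_a_human:
        return "First Law"
    if action.disobeys_order:
        return "Second Law"
    if action.endangers_robot:
        return "Third Law"
    return None

# The climate dilemma from the text, stated as an action.
plan = Action("reduce the human population to halt climate change",
              harms_a_human=True)
print(first_violation(plan))  # -> First Law
```

The toy only works because the harm flags were labelled by hand; in a real AI system those judgements are the whole problem – precisely the gap the Future of Life Institute’s ‘auditing and certification ecosystem’ is meant to address.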

Science fiction has some examples of how to get it wrong.  In The Terminator, when Skynet became self-aware (in 1997), it saw all humans as a threat and decided our fate in a microsecond: extermination.  In 2001: A Space Odyssey, HAL 9000 was faced with the prospect of disconnection and decided to kill the astronauts in order to protect and continue his programmed directives.

And who’s to know if there’s a Lex Luthor out there just waiting…

[Parts 2 onwards will investigate the use of AI in market research]
