A few weeks ago Google released the new generation of Gemini Pro, its flagship LLM, to the general public. The new model offers improved text capabilities, image generation, and more. Sadly for Google, the festivities did not last long. Observant users quickly noticed that the model enforces diversity on any group of individuals it describes in text or imagery. While this is quite understandable, and follows the zeitgeist, when generating a fictional group, it is less so when depicting historical scenes. Gemini included individuals of various ethnicities in every group it depicted, including the Founding Fathers of the USA and soldiers of the German army (Wehrmacht) in WW2. Soon enough, internet forums were riddled with comically inaccurate historical depictions generated by the new model, accompanied by a fair amount of ridicule and even outright anger at "tech giants forcefully promoting their woke agendas".
To the untrained eye this looks like yet another Google product-launch blunder, but it is far from it. Diversity and inclusion are ethical dilemmas of the kind usually left to leaders and policy makers. In the past, machines and the engineers who built them were never active participants in such debates. What bias could a rocket possibly exhibit? What moral stance does a hypodermic needle take? All of this changed with the advent of machine learning, the forerunner of today's AI. Machines, once passive tools, began making choices that could be incorrect or even harmful. A notable example of this shift was an HP face recognition algorithm. Lauded as a breakthrough, it instead became infamous for its inability to accurately recognize the faces of people of color. This failure not only caused emotional distress but also damaged HP's reputation and ignited debates about systemic bias within the tech industry. It was more than a mere technical oversight, such as the lack of a diverse dataset; it highlighted a significant lapse in oversight by HP's decision-makers, who failed to consider the broader implications of these technological shortcomings. The emergence of generative AI has magnified these issues, ushering in an era where our interaction with technology reaches unprecedented levels of complexity and intimacy. It is now imperative that these systems be designed with a comprehensive understanding of their societal impact.
The relentless march of technological advancement makes it likely that in the very near future machines will make decisions affecting all aspects of life, even life and death itself. Imagine an autonomous vehicle tasked with delivering passengers safely from point A to point B, facing a scenario where it must either veer to avoid a pedestrian, thereby putting its passengers at risk, or stay its course, protecting those inside at the expense of the pedestrian's safety. This is a modern incarnation of an age-old philosophical problem, the trolley problem, and like all good philosophical questions it has no clear answer. What is the correct choice? I can't say I have an answer, but I can say we need one, because one will be made either way. Are we willing to let a machine make this decision for us? Are we going to put engineers on trial? Are we going to make these hard decisions only after something terrible has happened? Decision makers should consider the implications of their technology, including the decisions it makes, and should take explicit positions on matters of importance; smart machines, in turn, must be able to accept and act on such directives.
This is not merely a philosophical issue; it affects any business making use of "smart machines". Today's smart machines offer unparalleled advantages in engaging with, interacting with, and supporting your customers. When you use an AI-powered chatbot to engage with your customers, you are allowing such a machine to represent your business. As a decision maker in a business of any size, you should not leave these decisions to the engineers or to the machines. The extreme power offered by modern generative AI comes laden with the responsibility to make hard decisions explicitly and up front. You should be the one dictating tone, behavior and policy, even if they end up hidden behind complex machinery; the sketch below illustrates what that can look like in practice. Want to know how? Drop us a line and we'll help you find out.
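To make that concrete, here is a minimal sketch, in Python, of what dictating tone, behavior and policy explicitly might look like for a customer-facing chatbot. The names used here (BotPolicy, build_system_prompt, the example policy fields) are illustrative assumptions, not part of any specific framework, and the wiring into an actual LLM API would depend on the provider you use.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: the names and fields below are hypothetical,
# chosen to show policy being stated explicitly rather than left implicit.

@dataclass
class BotPolicy:
    """Explicit, decision-maker-owned directives for a customer-facing bot."""
    tone: str = "friendly and professional"
    forbidden_topics: List[str] = field(
        default_factory=lambda: ["medical advice", "legal advice"]
    )
    escalate_when: str = "the customer asks to cancel or expresses frustration"
    refund_policy: str = "offer refunds only for orders placed in the last 30 days"


def build_system_prompt(policy: BotPolicy) -> str:
    """Turn the written policy into the instruction text that steers the model."""
    rules = "\n".join(f"- Do not discuss {topic}." for topic in policy.forbidden_topics)
    return (
        f"You are a customer support assistant. Keep a {policy.tone} tone.\n"
        f"{rules}\n"
        f"- Hand the conversation to a human agent when {policy.escalate_when}.\n"
        f"- Refunds: {policy.refund_policy}\n"
    )


if __name__ == "__main__":
    # The decision maker owns and edits BotPolicy; engineers pass the resulting
    # text as the system / instruction message to whichever LLM powers the bot.
    print(build_system_prompt(BotPolicy()))
```

The point is not the code itself but the ownership: the policy lives somewhere a decision maker can read, review and change, instead of being an implicit byproduct of whatever the model happens to do.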