AI: the debate

Late last year, we attended the IPA’s ‘The Big AI Debate’, where the industry gathered to share its thoughts on AI, both negative and positive. Since then, it seems like there’s a new AI-related headline every week. Given the sheer potential AI holds, it has created two mindsets within us:

1) Caution and apprehension that AI could overtake human capability.

2) Recognition that AI could become a very valuable tool, aiding every industry.

To recap: AI is an umbrella term that encompasses a range of smart technologies that can learn and improve on their own. 

In a survey of marketing and IT leaders by the data platform Lytics, 66% of marketers said they plan to integrate AI into their marketing stack soon. But how fast will that happen? Well, last year the Department for Digital, Culture, Media & Sport (DCMS) found that 15% of UK businesses had already adopted at least one AI technology. That translates to 432,000 companies.

The same research projected that the figure will rise to 23% by 2025, and that by 2040, 1.3 million UK businesses will be using AI technology.

According to a new paper from OpenAI, OpenResearch and the University of Pennsylvania, jobs as we know them are destined to be altered dramatically by AI technology. The research specifically explored how GPT (the generative pre-trained transformer, the model that allows computers to respond with human-like text, including question answering, summarisation and translation) would impact occupations currently performed by humans.

OpenAI’s Tyna Eloundou stated: “Our findings indicate that approximately 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted.”

But the question is: is this necessarily a bad thing?

AI systems can liberate human attention. There are only so many things we can attend to in the course of a day, which makes attention a valuable human resource. Models such as ChatGPT, and conversational AI more generally, will expand our attention by proofreading, summarising and drafting documents at lightning speed compared to humans, freeing us up to accomplish other tasks.
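
To make that concrete, here is a minimal sketch of handing a summarisation task to a conversational model, assuming the OpenAI Python SDK and an API key in the environment; the model name, prompts and helper name below are our own illustrative choices, not a prescribed setup:

```python
# A minimal sketch, assuming the OpenAI Python SDK is installed and an
# OPENAI_API_KEY is set in the environment. The model name, prompt and
# helper name are illustrative choices only.
from openai import OpenAI

client = OpenAI()

def summarise(document: str) -> str:
    """Ask a conversational model for a short summary of the given text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": "Summarise the user's text in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

print(summarise("Paste a long draft or meeting transcript here..."))
```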

Generative AI will create customised marketing pitches automatically from existing documents, videos and other sources of data. On the operations front, AI will support customers through natural dialogue that simulates human operators, reducing the need for humans in such roles.


Over in Japan, Farmship’s AI system uses photographs to estimate the height, width and weight of seedlings in order to predict the future growth of produce. It then eliminates seedlings that are not performing well, whilst identifying which will grow the most, making harvesting more efficient.

The startup was able to increase the ratio of seedlings growing properly to 80%, compared with 54% under standard practice, contributing to 17% higher yields. Its current trial system relies on people to photograph seedlings and replant them; however, Farmship is working to fully automate the process within the next two years with robots in “vegetable factories”.
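
Farmship hasn’t published its system, so the sketch below is purely hypothetical: it scores each seedling from its image-derived measurements and keeps the strongest growers for replanting, with every name, weight and threshold invented for illustration:

```python
# Hypothetical sketch of the seedling triage described above: score each
# seedling from image-derived measurements, keep the strongest for replanting.
# The measurements, weights and keep ratio are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Seedling:
    height_mm: float  # estimated from the photograph
    width_mm: float
    weight_g: float

def growth_score(s: Seedling) -> float:
    # Toy linear proxy for predicted growth; a real system would use a model
    # trained on historical growth outcomes.
    return 0.5 * s.height_mm + 0.3 * s.width_mm + 0.2 * s.weight_g

def select_for_replanting(seedlings: list[Seedling], keep_ratio: float = 0.8) -> list[Seedling]:
    """Rank the tray by predicted growth and keep the top fraction."""
    ranked = sorted(seedlings, key=growth_score, reverse=True)
    return ranked[: int(len(ranked) * keep_ratio)]

tray = [Seedling(42, 18, 1.2), Seedling(35, 15, 0.9), Seedling(50, 22, 1.5)]
print(select_for_replanting(tray))
```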

Indoor facilities using vertical farming systems – or vegetable factories – are projected to stabilise production and improve quality, as they are less affected by weather and other extraneous circumstances. Barriers to the growth of these facilities include cost and labour: sorting seedlings is the second most time-consuming task after harvesting, accounting for about a twelfth of cultivation costs. Farmship aims to significantly reduce these costs and automate production through its AI and robotics technology.

In the UK, it has been found that AI is now more effective than doctors at picking out transplant organs. The team behind Organ Quality Assessment (OrQA) believes the system could save lives and “tens of millions” of pounds.

Colin Wilson, transplant surgeon at Newcastle upon Tyne Hospitals NHS Foundation Trust and co-lead of the project, said: “Transplantation is the best treatment for patients with organ failure, but unfortunately some organs can’t be used due to concerns they won’t function properly once transplanted. The software we have developed ‘scores’ the quality of the organ and aims to support surgeons to assess if the organ is healthy enough to be transplanted. Our ultimate hope is that OrQA will result in more patients receiving life-saving transplants and enable them to lead healthier, longer lives.”

The National Institute for Health and Care Research (NIHR) has invested £1m in funding to develop the technology, which works in the same way as facial recognition to evaluate the quality of an organ.
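
OrQA’s internals haven’t been published, but “works in the same way as facial recognition” suggests a convolutional network mapping an organ photograph to a quality score. Here is a toy sketch of that general shape, entirely illustrative rather than the actual OrQA model:

```python
# Entirely illustrative sketch of scoring an organ photograph with a small
# convolutional network, echoing the "facial recognition" analogy above.
# This is NOT the actual OrQA model, whose architecture is not public.
import torch
import torch.nn as nn

class OrganQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),  # quality score in [0, 1]
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(image))

model = OrganQualityNet()
photo = torch.rand(1, 3, 224, 224)  # stand-in for a pre-processed organ photo
print(model(photo).item())
```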

It’s believed the tech could result in up to 200 more patients receiving kidney transplants and 100 more receiving liver transplants every year in the UK.


These seem like very good things for crucial aspects of both farming and healthcare. And whilst the accuracy is mind-boggling, and at times scary, it seems we are now at the point where AI is already being integrated into society.

And this raises the question: will AI replace humans?

AI becomes more and more intelligent as it learns, from us. In marketing terms, intelligent machines will continue to improve on this front, generating more data as they interact with more people, more often; it’s the ultimate in one-to-one marketing, seeing how people react to the content generated from a brief.

We’ve already seen how AI can automatically create new kinds of materials – images, videos and stories – tailored to particular individuals and groups. This, then, could replace the millions of people working within the creative industries, and it’s a fear many hold.

However, at the IPA’s Big AI Debate, the panel were confident that AI could not replace human creativity.

OpenAI chief executive Sam Altman discussed this in a recent interview: "If you asked people 10 years ago, what they thought the main application of AI would be – they likely would have told you that it would be coming for the blue collar jobs first. Factories, self-driving cars, deliveries. Then it would come for 'low-skilled' white collar work. Then programming. Then finally – creative work. But the inverse of this has happened."

Ultimately, the work it produces is – by its very nature – as average as you can get. As Sam Altman notes, "AI is essentially the equivalent of an average human".

As one person in the crowd at the debate last year said, ‘I think AI can do ASDA’s Buddy the Elf, but it couldn’t do the Cadbury Gorilla’.

All in all, AI is a powerful extension of our human capabilities, not a cheap replacement. ChatGPT and other AI tools are changing the creative process, but they aren’t changing the creative output, nor the final product.

And whilst we grapple with these big questions, which don’t have a definite answer yet, governments and lawmakers are facing the consequences of AI for copyright.

For example, several ongoing lawsuits have raised legal concerns around the use of AI-generated images of human models. Questions over who truly owns these images, and whether they might infringe on existing copyrighted works, are compounded by the rapidly blurring line between reality and fiction.

Currently, Getty Images, known for its historical and stock photos, has sued AI image generation company Stability AI, the maker of Stable Diffusion, for copyright infringement. Getty alleges that the company copied over 12 million of its images to train its AI model 'without permission or compensation'.

To avoid repeating such a scenario, creative software companies like Adobe have started to address the issue. Adobe recently introduced Firefly, a generative AI tool that will include a “Do Not Train” tag for creators who do not want their content used in model training.

In addition, to avoid the legal minefield, brands like L'Oréal have established a robust framework for the ethical development and use of AI systems in their marketing mix. 

They have outlined a structure and policies to mitigate the risks around bias and privacy in the use of AI models, taking the UN Guiding Principles into account.

Meanwhile, the UK government has published a white paper detailing how it plans to regulate artificial intelligence.

The government notes that AI, which it describes as a “technology of tomorrow”, contributed £3.7bn ($5.6bn) to the UK economy last year, and AI advocates note that it is already delivering many commercial, economic and social benefits.

However, there is still an underlying fear that the rapid growth of AI could threaten jobs or be used for malicious purposes. Some 1,100 signatories, including Elon Musk of Twitter and Tesla, Apple co-founder Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter calling on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter, published last month by the nonprofit Future of Life Institute, adds that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

The UK white paper, for its part, does not propose a new regulator. Instead, the government says existing regulators, including the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority, should come up with their own approaches that suit the way AI is actually being used in their sectors.

The white paper outlines these five principles:

  • Safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed.

  • Transparency and "explainability": organisations developing and deploying AI should be able to communicate when and how it is used and explain a system's decision-making process in an appropriate level of detail that matches the risks posed by the use of AI.

  • Fairness: AI should be used in a way which complies with the UK's existing laws, for example on equalities or data protection, and must not discriminate against individuals or create unfair commercial outcomes.

  • Accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes.

  • Contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI.

So, what do we conclude?

A whole new world is emerging much faster than we had anticipated even a year ago. AI will give a major boost to productivity, but in the process it could displace large numbers of people whose jobs rely on their perceptive or intellectual abilities. Yet as the examples above show, government bodies and brands are placing ethics at the heart of how they use AI; even though the technology is growing rapidly, regulation is catching up.

Additionally, we are still in the experimentation stage of using AI, but we agree that it can be used to streamline manual processes, making our work faster and more efficient, so that humans have more time to focus on the challenges and the fun parts. It is becoming another medium to play around with for generating concepts and ideas, unleashing human creativity rather than doing the creating for us.

Behind every AI system is, or was, a human telling it what to do, and that creativity and passion cannot be replaced by AI, because it is not human.