
Why responsible AI adoption matters for your brand’s reputation

Every week, new AI tools and use cases hit the market. For branding and marketing teams, this can be an exciting prospect, as new ways to work and collaborate are discovered, leading to dramatic time and cost savings and turbocharged creative capacities. 

At the same time, however, the rush to invest in or use free online AI solutions can backfire if care isn’t taken, with potentially huge consequences for teams and their businesses. 

Amongst the new AI tools on the market, Generative AI (GenAI) is particularly important for brand marketing. As popularised by tools such as ChatGPT and Midjourney, GenAI allows users to describe a task and let powerful models get on with generating the outcome.

These outcomes could be AI-generated images and brand assets, customer support messages, or new campaign ideas.

Forecasts suggest the GenAI market is set to boom over the next decade. For brand teams interested in crafting iconic and trusted brands in the 2020s and beyond, the time to get to grips with these technologies is now.

AI, brand reputation and trust

A survey of communications professionals found that, while 86% were optimistic about the potential of AI, 85% were also concerned about the legal and ethical issues it raises.

AI adoption creates opportunities but also seeds new challenges, problems, pitfalls and risks. Customers are curious, but also anxious about what the implications of these new technologies will be for their lives.

Over the coming years, how companies use their AI tools will have a direct impact on their reputation, how much customers trust them, and how markets treat them. 

Modern brands should be aiming to use these new technologies to create real value for customers, businesses and society. It starts with knowledge, understanding, and careful planning. 

Establishing trust in uncertain times

Customer trust has long been understood to be at the core of successful branding. As consumers we simply like to spend our money with brands that we believe in. Research also shows that customers who trust a brand are three times as likely to forgive product or service mistakes.

When it comes to adopting AI tools, it’s therefore important to ask yourself the question – is our company using AI in a way that builds customer trust? Or could our choices be doing the opposite?

Sparebank found this out the hard way, when it came to light that the Norwegian bank had run an AI-generated image in its marketing without labelling it as such.

This broke legislation on misleading marketing, which requires that subjects used in ads be real users of the product or service. It also potentially contravened Norwegian regulations on image manipulation, which require that images that have been airbrushed or edited are clearly marked in order to reduce pressure that could lead to shame or body dysmorphia. 

The result was a media storm, in which Sparebank were forced to publicly admit their mistake and promise to take more responsibility in future.

The lesson? New capacities created by AI tools might seem great on paper, saving time and money and helping to bring new creative ideas to life. However, if they contravene legislation or prevailing social norms, the best intentions can quickly backfire. 

Respecting privacy with AI technologies

How many people are currently using ChatGPT at work, unaware that information entered into its prompt box leaves their company’s control and may be retained or used to train future models?

With most companies building their AI tools on top of third-party machine learning models, complex issues arise around data protection and privacy. Without proper assessment and training, well-meaning employees may end up breaching GDPR and other data-protection regulations without realising it.
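One practical (if partial) safeguard is to strip obvious personal data out of prompts before they ever leave the business. The short Python sketch below is a hypothetical illustration – the patterns and names are our own, and it is nowhere near a complete compliance solution:

    import re

    # Hypothetical example: mask obvious personal data before a prompt is sent
    # to any third-party generative AI service. Pattern lists like this are
    # never exhaustive; real compliance needs dedicated tooling and review.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
        (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),     # phone-like numbers
    ]

    def scrub_prompt(text: str) -> str:
        """Return a copy of the prompt with simple PII patterns masked."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    prompt = "Draft a reply to jane.doe@example.com, phone +47 22 12 34 56, about her refund."
    print(scrub_prompt(prompt))
    # -> Draft a reply to [EMAIL], phone [PHONE], about her refund.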

Until regulators and legislators catch up with AI technologies and provide clear and unambiguous guidelines, this is a potential minefield for brand reputation. 

Companies need to take care not to intrude into their customers’ and employees’ private lives in ways that overstep reasonable boundaries.

Consider that, as tools get more powerful, brands will be able to advertise and persuade us with increasingly subtle and powerful strategies. Where is the line drawn between personalised, data-driven marketing and outright manipulation? 

Or consider that there is at least one AI wellbeing tool in development that purports to let companies track productivity alongside employee wellbeing. All good – but what happens if the algorithm suggests that productivity drops once wellbeing rises beyond a certain point?

These might be speculations, but they could very soon become realities. As the famous theorist Paul Virilio once remarked,  “the invention of the ship was also the invention of the shipwreck.” 

Companies need to tread carefully to ensure that good intentions don’t accidentally lead to intrusive or manipulative practices which, once publicly exposed, will meet with an understandable backlash.

Implementing ethical AI solutions 

With all this said, what can companies do to minimise the risk and maximise the value that AI can contribute to customers, employees, and society?

We can begin with a simple principle of humility. Despite our best attempts to guess, no-one knows for certain what the impact of AI will be. As we saw with Sparebank, what likely began as a reasonable business intention – “let’s use these new tools to save time and money” – quickly turned into a public scandal. 

Sparebank quickly admitted it got it wrong, which may in the long run work in its favour. In times of uncertainty and change, transparency and honesty go a long way towards (re)building trust.

Brand teams should keep this in mind. Over the coming years, more companies are likely to have their reputations tested as they experiment with AI technologies. The most successful will find ways to innovate, while maintaining respect for their customers and sensitivity to when ethical lines are crossed. 

Creating an ethical charter is one way that companies can ensure their intentions are aligned with positive societal outcomes. An ethical charter defines clear values for how AI should be used, providing a framework for decision making when boundaries get murky and regulations aren’t much use. 

Papirfly’s ethical charter, for example, covers four major principles:

  • Be a good corporate citizen when it comes to the rightful privacy of our users
  • Ensure we act in an unbiased manner – always – as we’d expect to be treated too
  • Build in the highest level of explainability possible, because output is important
  • Overall, our task is simple – we must build technology that is designed to do good

Within each of these principles are further specific guidelines for how AI should be built and used within our business. 

Naturally, ethical charters will vary from company to company to reflect their specific needs and markets. The aim should be to create a strong company culture, laying the foundations for ethical decision making and a reputation that customers can always trust. 

Towards an AI powered future

Artificial intelligence depends on responsible humans making clear decisions within strong ethical frameworks. 


At Papirfly, we are committed to using AI to enhance every user’s experience, all while continuing to empower the world’s biggest brands with our all-in-one brand management platform. 

Learn more about how Papirfly is tackling the challenges of branding and AI ethically.


AI beyond the hype – Staircases to (AI) Heaven and Hell

In this AI-Illuminate series, we’ve looked at a number of areas where artificial intelligence will drive real and meaningful change. We’ve looked at how Hollywood has convinced us of eternal doom, we’ve considered how machines will rid us of meaningless tasks, and we’ve discussed ways that machine learning might not build Society 2.0 as equitably as we’d like.

But as with all situations, there are two sides to the coin.

The Staircase to (AI) Hell

Let’s start with the depressing take on the journey ahead.  Introducing the Staircase to (AI) Hell.

Beginning with ‘simple automations’ doesn’t feel that scary. Human beings are inherently lazy: we don’t generally like repetitive things, and if there’s a faster way to do something we’ll usually opt for it. Enter the robots! It’s easy to envisage a world where anything even remotely repetitive is simply done by a machine.

Even as we move to the next ‘step’ of the staircase, and we start to see some ‘low priority’ jobs being replaced, most people still have little to no concern – perhaps because most of those discussing the AI debate right now consider their own jobs to be higher priority.

As we approach the step where Deep Learning can do a lot of things better than humans, we end the ‘light blue’ section of this staircase and start entering darker territories.

The first real grey area is the point in time where Deep Learning transfers billions of tasks from humans, replacing hundreds of millions of jobs. We’re no longer ‘just’ talking about the jobs most people think are not theirs – we enter a period of wholesale change with white-collar and blue-collar jobs equally threatened. What do the newly-unemployed do? How do they survive?

As we enter the ‘dark blue’ steps of the staircase we see Artificial General Intelligence (AGI) surpassing most abilities of most humans, which then leads to the point of ‘Singularity’, where machines become too powerful for their human creators to control.

At this point it really is humans vs. robots and, by all accounts, we don’t look set to win.

It is somewhat depressing.

Who can save us?

The Staircase to (AI) Heaven

I believe we can, as is common with many of the debates around AI, look to the past for our saviour – Isaac Newton, to be precise. Newton’s Third Law states that for every action in nature there is an equal and opposite reaction. So perhaps we can turn the Staircase to (AI) Hell into a Staircase to (AI) Heaven? What could that look like?

Well, in Newton-friendly terms, it’s equal and opposite. When we flip the staircase upside down, we start with the same simple automations that save us humans from the boring things we don’t want to do. This, in itself, can only be a good thing. More efficiency can definitely help us focus on other things. It could likely also be part of the solution to some of our big global problems, like waste and inequitable distribution.

As we progress through the next two steps, there’s a positive to each too.

Complex AI replacing ‘low priority’ jobs is fine if what we mean by ‘low priority’ jobs are jobs humans are not very good at, jobs that are dangerous to their health, or jobs where we expend resources doing things unnecessarily – as long as we also start to migrate those displaced people into new, better roles and / or find ways to replace their income.

Likewise, where AI can do things better than humans, let’s use AI. Of course it makes sense. If a machine is 10x more accurate at doing something, let the machine do it. Where a human + machine combination excels, as in the visual detection of some cancers, then let’s make it happen. Again, we simply must not forget to plan for the displaced. Is it time to look at Universal Basic Income (UBI) models again, for example?

Where that displacement of jobs becomes wider and deeper, we do need to be ready. If AI is set to change a billion jobs within the decade, as some academics predict, our policymakers, lawmakers, and politicians need to be working on Plan B now. If we are to leverage the opportunity of the technologies we have created, we need to be ready.

We’ve been here before. The agricultural revolutions of the 19th Century – forever changing what farm labour looked like through the introduction of machinery – are the very reason we’re all able to sit here and read this article when we’d otherwise be out bringing in the harvest so our families could eat. I love the countryside but I’m very grateful for the historic jobs displacement that means I don’t need to grow my own wheat every year.

Daring to dream of AI’s future

The next few steps on the Staircase to AI Heaven are not filled in yet. We don’t know what the future will hold – but we can dare to dream…

Eradicating waste. Making human error a thing of the past, being able to predict things perfectly. Transforming health outcomes. Increasing quality of life for everyone. Curing cancer. Moving beyond fiat money. Driving equitability. Understanding where we come from. Closing the income gap. Working globally as one. People living longer. The end of discrimination. Solving the climate crisis.

It might not all be possible – and certainly not within our lifetimes – but it’s a wonderful AI Heaven to believe in.

Watch our on-demand webinar


AI beyond the hype – our Ethical Charter

In part 3 of this series we explored the Three Pitfalls of AI – privacy, replication, and bias. Each poses significant threats to how we live and work today, creates barriers to the mass adoption of machine intelligence, and raises complex questions about safety and regulation.

That said, the opportunities AI promises are equally significant. We are likely at the start of a fourth industrial revolution and – even if we don’t know exactly how yet – artificial intelligence is going to change a lot (if not everything).

In a new world operating without new regulation, the onus on companies like Papirfly – those building the technology of tomorrow – to self-regulate becomes critical. Good corporate citizenship, acting responsibly, and pursuing opportunities ethically all require guidance and leadership.

To help our people make promises to our customers and our users about how we’ll build our software, we have created our Ethical Charter.

Comprising eight action statements within four ethical themes, it governs how we – as a company – will build technology, and it sets out a pledge for how we will put users at the heart of doing so.

Today we publish it openly.

Papirfly’s Ethical Charter

Be a good corporate citizen when it comes to the rightful privacy of our users

1. We must always obey local, regional (including GDPR), and international privacy laws. Beyond the question of legality, we must always treat users ethically too. This includes creating AI applications that do not invade their privacy, do not seek to exploit their data, do not collect any data without express (and understood) consent, and do not track users outside of our own walled garden. We do not need data from the rest of their activities, so we should not seek to obtain and use it.

2. As a hard rule, we do not use data to create profiles of our users in order to negatively score, predict, or classify their behaviours. We must never use their personal attributes or sensitive data for any purpose. Neither of these tactics is required for us to make better software for them (which is what we are here to do), and so they are inappropriate. We must always understand where our ethical red line is and ensure everything we do is on the correct side of it.

Ensure we act in an unbiased manner – always – as we’d expect to be treated too

3. We acknowledge that there can be unacceptable bias in all decision making – whether human or machine based. When we create AI applications we must always try to eliminate personal opinion, judgement, or beliefs, whether conscious or otherwise. Algorithmic bias can be partially mitigated by using accurate and recent data, so we must always do so. Remember, a biased AI will produce results of similar quality to a biased human – “garbage in, garbage out” always applies here.

4. We must use AI to augment good and proper human decision making. We do not want, or need, to build technology to make automated decisions. As in other areas of our business, like recruitment, we have not yet proven the strength of affirmative action (sometimes called positive discrimination), and so mathematical de-biasing is not considered an option for us. As such, all decision making inside any application must include humans. Their skillsets, experience, and emotional intelligence can – and should – then be added to by AI.

5. We work to the principle of “you get out what you put in” and understand that in order to build technology for the future we can neither look only to the past (using out-of-date data, for example) nor build AI on top of existing human biases. Bias on the grounds of gender, ethnicity, age, political views, or sexual orientation (this list is not exhaustive) is discriminatory, and we must proactively exclude these human traits of today and yesterday from our search for the technology solutions of tomorrow.

Build in the highest level of explainability possible, because output is important

6. We are not interested in building black box solutions. If we can’t create defensible IP without doing so, then we’re not doing our jobs properly. We want, wherever possible – and always when possible – to be able to explain, replicate, and reproduce the output of a machine we have built. We owe this to our users, and it’s also how we’ll get better at what we do. The better we understand what we are building, the quicker we can evolve it.

7. We actively subscribe to the “right to explanation” principle championed by Apple, Microsoft, and others. We must build applications that give users control over their personal data, let them determine how decisions have been made, and make it easy for them to understand the role their data plays in our product development. We can do this without affecting our ability to defend our IP and, therefore, should do so by default. Whilst full replication is not always possible (within deep neural networks, for example), our mission – and policy – is to do as much as we feasibly can.

Overall, our task is simple – we must build technology that is designed to do good

8. Technology is a wonderful and powerful thing. As a software company, we must believe that. But behind any, and every, application for good there are usually opportunities for evil too. As we depend more and more on AI it will take on a bigger role inside our organisation. As we craft and hone it, it is our responsibility to put ethics at the forefront and build responsibly. For now, we are our own regulators. Let’s be the best regulators we can be.

Moving forward with ethics at the heart of AI innovation

At Papirfly we have defined an Ethical Charter that governs what technology we build, how we build it, and the ethical parameters within which we build it. In Part 5 of this series we’ll provide a comprehensive analysis of various scenarios related to AI, highlighting both the benefits (“heavens”) and potential drawbacks (“hells”). This balanced perspective will present a clear view of AI’s capabilities and limitations.

Watch our on-demand webinar


AI beyond the hype – The 3 pitfalls of AI

You don’t have to look far to feel the air of AI scaremongering: robots coming for our jobs, AGI (Artificial General Intelligence) surpassing human ability, reaching the point of ‘Singularity’ – the hypothesis that AI will become smarter than people and then impossible to control – and the end of humankind as we know it. Movies like The Terminator, Minority Report, and Ex Machina have Hollywoodified Earth’s surrender to technology for years now.

When you ask corporate leaders about the risks AI presents to their organisations, you tend to get similar answers based on similar themes:

  • What about copyright and intellectual property?
  • What about job displacement and human capital?
  • Are we at the start of a machine-led world?
  • Are we at risk of machine intelligence replacing human intelligence?
  • How do we compete against infallible machine intelligence?
  • How do we move fast enough to mitigate the risk of being out-run?
  • Who wins in the end – us, our competitors, new entrants, or AI generally?
  • Who can be trusted to regulate this new world?

Each of these questions can be unravelled to reveal opportunity alongside risk, but many leaders currently find it hard to differentiate between the two. There is so much noise. There is still a talent gap – it’s estimated we need 10x the computer science graduates we have today just to meet the hiring plans already announced by major software companies. And the pace of change is outstripping what existing operational models, such as fiscal years or quarterly reporting, were designed for.

This means, in all likelihood, that we must look elsewhere for a rational and unencumbered view. Let’s look at what academics have reached agreement on.

Academics don’t routinely agree with each other, but for the past decade thought leaders from the biggest and best technology education institutions (including the Massachusetts Institute of Technology [MIT], the University of Oxford, Stanford University, the Indian Institute of Technology, the National University of Singapore, and others) have settled on the Three Pitfalls of AI as being Privacy, Replication, and Bias.

Privacy

Defined as the ability of an individual or group to seclude themselves, or information about themselves, and thereby be selective in what they express, privacy is a term we’re all familiar with. The domain of privacy partially overlaps with security, which includes the concepts of appropriate use and protection of information.

But the abuse of privacy can be more abstract. Consider the (existing) patents that connect social media profiles with dynamic pricing in retail stores. The positioned use case is usually a discount presented to a shopper because the retailer’s technology knows – from learned social media data – that they are likely to buy if presented with a coupon or offer. This feels win-win for both parties and therefore the data shared feels like a transactional exchange rather than a privacy intrusion.

However, where that same technology can be used to inflate the price of a prescription for antidepressants – because the data tells the retailer’s system that the shopper is likely struggling with their mental health – it quickly becomes apparent that the human cost of privacy abuse could be very high indeed.

Privacy is considered one of the three pitfalls of AI because data (so often personal data) is intrinsically linked to machine intelligence’s success. The conversation around who owns that data, how it should and should not be used, how to educate people about its importance, and how to give users more control over it has been happening in pockets of society (but far from all of it) for a long time. As AI advances, it’s widely acknowledged that this area has to evolve in tandem.

Replication

The inability to replicate a decision made by AI – often referred to as a ‘black box’ – occurs when the programmers, creators, or owners of a technology do not understand why their machine makes one decision and not another.

Replication is essential to proving the efficacy of an experiment. We must know that the results a machine produces can be relied on consistently in the real world, and that they didn’t happen randomly. Using the same data, the same logic, and the same structure, machine learning can produce varying results and / or struggle to repeat a previous result. Both of these are problematic – and can be particularly troublesome when it comes to algorithms trained to learn from experience (reinforcement learning), where errors compound.
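To make the problem concrete, here is a minimal Python sketch – an invented toy, not any particular product. The data and the training logic are identical on every run, yet the answer changes because the starting point is random, until the seed is pinned down:

    import numpy as np

    def train_tiny_model(seed=None):
        """Fit y = w * x by gradient descent from a random starting weight.

        The data and the update rule are identical on every call; only the
        random initialisation (and therefore the result) can differ.
        """
        rng = np.random.default_rng(seed)
        x = np.array([0.0, 1.0, 2.0, 3.0])
        y = np.array([0.0, 2.0, 4.0, 6.0])      # true relationship: y = 2x
        w = rng.normal()                         # random starting point
        for _ in range(5):                       # deliberately too few steps to converge
            grad = np.mean(2 * (w * x - y) * x)  # gradient of the mean squared error
            w -= 0.05 * grad
        return w

    print(train_tiny_model(), train_tiny_model())                # unseeded: two different answers
    print(train_tiny_model(seed=42), train_tiny_model(seed=42))  # seeded: identical, repeatable answers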

The ‘black box’ approach is often excused by claims of IP protection or the ‘beta’ status of products. But a prolonged inability to interrogate, inspect, understand, and challenge results from machines leads to an inability for humans to trust machines – whether that’s confusion about how a lender has credit-scored your mortgage application, or something even more serious, like not being able to prove that a prospective employer has used machine learning to discriminate against a candidate.

Replication is considered one of the three pitfalls of AI because we need to know we can trust AI. For us to trust it we need to be able to understand it. To be able to understand it we need to be able to replicate it.

Bias

“A tendency, inclination, or prejudice toward – or against – something or someone” is how bias is usually defined. Today, a Google search for “AI bias” returns more than 328m results. Unfortunately, AI and bias seem to go hand in hand, with a new story about machine intelligence getting it (very) wrong appearing daily.

As the use of artificial intelligence becomes more prevalent – and its impact on data-sensitive areas such as recruitment, the justice system, healthcare, and financial inclusion grows – the focus on fairness, equality, and discrimination has rightly become more pronounced.

The challenge at the heart of machine bias is, unsurprisingly, human bias. As humans build the algorithms, define training data, and teach machines what good looks like, the inherent biases (conscious or otherwise) of the machines’ creators become baked-in.

Investigative news outlet ProPublica has shown how a system used to predict reoffending rates in Florida incorrectly labelled African American defendants as ‘high-risk’ at nearly twice the rate it mislabelled white defendants. The system didn’t invent this bias – it extrapolated and built upon the assumptions and historical data fed to it by its creators.

Technologists and product leaders like to use the acronym GIGO – ‘Garbage In, Garbage Out’ – and it absolutely applies here. When we train machines to think, all of the assumptions we include at the beginning become exponentially problematic as that technology scales.
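A tiny, purely synthetic Python example makes the point (the scenario, groups, and numbers are invented for illustration): if historical hiring decisions penalised one group, a model trained on those decisions quietly learns to do the same.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B (synthetic)
    skill = rng.normal(size=n)                 # the attribute that *should* decide the outcome

    # Historical labels: the same skill threshold, but group B was also penalised.
    hired = ((skill - 0.8 * group) > 0).astype(int)

    # Garbage in: train on the biased history...
    model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

    # ...garbage out: two equally skilled candidates get very different scores.
    candidates = np.array([[0.5, 0], [0.5, 1]])
    print(model.predict_proba(candidates)[:, 1])   # group B's probability is markedly lower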

Bias is considered one of the three pitfalls of AI because technology is often spoken of as a great ‘leveller’, creating opportunities and democratising access. But so long as AI bias is as bad as, or worse than, human bias, we will in fact be going backwards – with large sections of society disadvantaged.

Responding to AI’s challenges

Each of these Three Pitfalls of AI is serious, and they have attracted a lot of attention – including from the leaders of the very companies at the forefront of AI’s development and evolution. When more than 1,100 technology leaders and researchers signed the now-infamous open letter calling for a pause in the development of the most powerful AI systems, they were essentially asking for time for humans to catch up and think about the possible consequences of our actions.

There are further questions about regulation, with lawmakers struggling to keep up with the rate of change. Trust in politicians remains stubbornly low globally, and the public is also hesitant to trust a small group of technology company billionaires with what could realistically be existential threats to parts of how we live today. But self-regulation is where we are currently, and that means it comes down to individual technology creators to build responsibly and ethically.

At Papirfly we have defined an Ethical Charter that governs what technology we build, how we build it, and the ethical parameters within which we build it. In Part 4 of this series we’ll share our Ethical Charter and demonstrate how it works in our company.

Watch our on-demand webinar


AI beyond the hype – “AI was invented in December 2022…right?”

Cast your mind back to the end of last year and think about your Instagram, Facebook, and Twitter (as it still was) feeds. If they were anything like mine they were likely full of friends’ AI-generated photos. Or, at least, the ones that made them look smarter, prettier, taller, thinner, etc. Generative AI had exploded into the mainstream.

You could be forgiven for thinking it was invented around then too – and you may be surprised to learn that Artificial Intelligence is as old as the aunties and grandmothers who asked you about it around the dining table at Christmas 2022.

Putting AI to the test

AI is around 70 years old. Its roots can be traced back to Alan Turing (of ‘The Imitation Game’ fame), the British WWII codebreaker. Turing was a leading mathematician, developmental biologist, and a pioneer in the field of computer science. His earliest work created the foundations for AI as we know it. His eponymous test, The Turing Test (created in 1950), tests a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation was a machine, and all participants would be separated from one another.

The conversation would be limited to a text-only channel, such as a computer keyboard and screen, so the result would not depend on the machine’s ability to render words as speech. If the evaluator could not reliably tell the machine from the human, the machine would be said to have passed the test. The test results would not depend on the machine’s ability to give correct answers to questions, only on how closely its answers resembled those a human would give.

A thought experiment

The Golden Age of AI followed, spanning roughly 1956-1976. During this period, scientists and researchers were optimistic about the potential of AI to create intelligent machines that could solve complex problems by matching human intelligence – or even surpassing it.

Whilst the era fizzled out, it delivered many a ‘first’ that still holds value today. ELIZA – created at MIT by Joseph Weizenbaum between 1964 and 1966 – was one of the first chatbots (then called ‘chatter bots’) and an early passer of The Turing Test; it could be considered ChatGPT’s ‘great-great-grandmother’.

Moving into the next decade, John Searle (a prominent American philosopher) set the tone with his Chinese Room thought experiment. Searle proposed the Chinese Room as an argument against the possibility of AI, aiming to illustrate that machines cannot have understanding.

Searle uses the following scenario to demonstrate his argument:

“Imagine a room in which a man, who understands no Chinese, receives, through a slot in the door, questions written in Chinese. When he receives a question, the man carefully follows detailed instructions written in English to generate a response to the question, which he passes back out through the slot. Now suppose the questions and responses are part of a Chinese Turing Test, and the test is passed”.

Chess and penguins

The years that followed this ‘downer’ of a start to the 1980s were low in ambition and confined to what we now look back on as ‘Behavioural AI’. Knowledge-based systems, sometimes called ‘expert systems’, were trained to reproduce the knowledge and / or performance of an expert in a specific field. They mostly used “if this then that” logic flows, and they didn’t always get it right – with the identification of penguins (birds, but flightless ones) being an oft-cited example of the basic errors of the time.
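For a flavour of what those systems looked like, here is a toy ‘if this then that’ rule set in Python (ours, not a reconstruction of any real system) – complete with the classic penguin bug and the kind of hand-written exception rule used to patch it:

    # A toy rule-based "expert system" in the 1980s style described above.
    def can_fly(animal: dict) -> bool:
        if animal.get("is_penguin"):        # exception rule, added after the bug was spotted
            return False
        if animal.get("is_bird"):           # original rule: "if bird then it can fly"
            return True
        return False

    print(can_fly({"is_bird": True}))                       # True  - e.g. a sparrow
    print(can_fly({"is_bird": True, "is_penguin": True}))   # False - the penguin, once patched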

This era produced a few big wins – especially in the efficiency space, like Digital Equipment Corporation’s ‘R1’ application, which saved the company around $40m per year by optimising the efficiency of computer system configurations. But this was before the advances in computerised automation that would later make corporate adoption commonplace. It’s also acknowledged as the period that birthed the first bias in AI.

A lot happened in the world of AI in the 1990s, with major advances in defence, space, financial services, and robotics. So it’s perhaps surprising that most AI historians and computer scientists point to the same turning point for machine intelligence. In 1997 ‘Deep Blue’, a chess-playing computer from IBM, beat then world champion Garry Kasparov. Prior to this, chess had been singled out as a ‘frontier’ for machine vs. human intelligence, with many people believing the human brain to be the only one capable of mastering a game with between 10¹¹¹ and 10¹²³ possible games. (A ‘googol’, the inspiration behind Google’s name, is 10 to the 100th power – 1 followed by 100 zeros.) Machine intelligence had arrived.

A new era emerges

As the use of computers in domestic settings proliferated, there was an exponential surge in Internet usage in the mid-1990s, with the last few years of the decade renowned today for the dotcom bubble (1995–2000) and its ultimate implosion. Throughout this time AI took a backseat in social contexts, despite already starting to power many early-version consumer applications, websites, and software. Commercially, the focus was on automation and efficiency – neither of which was particularly “sexy” or fun.

Enter… the self-driving car – a long-held obsession and science fiction staple. The period between March 2004 and October 2005 was to become the start of a whole new age. The DARPA Grand Challenge was a competition for autonomous vehicles funded by the Defense Advanced Research Projects Agency, the research and development agency of the United States Department of Defense. The 2004 race saw 21 teams, each with their own self-driving vehicle, prepare to compete over a course of roughly 150 miles / 240 km.

A grand total of zero entrants finished the race in 2004. But in the 2005 race, five vehicles successfully completed the course, and of the 23 entrants, all but one surpassed the 7.32 miles / 11.78 km completed by the best vehicle in 2004. The winner on the day was Stanley (named by its creators, the Stanford Racing Team), but the overall winner was AI itself, with optimism levels rallying and the machine intelligence conversation growing in reach and volume.

Humanoid robots and sci-fi dreams

In the late 2000s, AI entered its ‘modern era’. A number of humanoid robots brought AI closer to science fiction, driverless car projects became abundant, AI was being built into consumer and commercial applications, and the Internet of Things (IoT) emerged – with the ratio of things-to-people growing from 0.08 in 2003 to 1.84 in 2010 alone.

The 2010s were really where we saw mass proliferation of AI in society. When we think about the mainstream tech we take for granted today much of it was born (or matured) in this decade. Virtual assistants like Siri. Machine learning tools. Chatbots capable of human-quality conversation. Mobile phone use cases. Photography aids. In-car innovations like satnav and cruise control. Smart watches. Smart appliances. Real-time share trading platforms that everyone can use, not just financial giants. Even the humble product recommendation engine. They all use AI.

We arrived in the 2020s with 70 years’ build-up in artificial intelligence, machine learning, and deep learning. The past few years have seen significant advances, and the next few undoubtedly will too.

Embracing AI’s evolution

When you next ask ChatGPT to write that report for you, when you use Papirfly’s Generative AI to create hundreds of illustrations in an instant, or when you think about the potential pitfalls of AI (we’ll address these in an upcoming article), do remember that we’re not dealing with a brand new toy here.

Instead, we are working with technology that started its journey in the 1950s. A journey that has seen an amount of change its early creators could never have predicted in their wildest dreams, and one that is likely to transform almost every aspect of human life in the next decade.

Watch our on-demand webinar


AI beyond the hype – Adaptation and adoption

Papirfly has worked with AI for some time and has successfully implemented it in the production of illustrations for one of its customers. The company used the customer’s brand guidelines to train the AI to create new illustrations that are in line with the brand’s look and feel. This allows the customer to quickly generate new illustrations at a low cost and with a faster turnaround time compared to the traditional workflow of requesting and waiting for an illustrator. The AI-generated illustrations are still subject to manual approval.

The intro text above was written by ChatGPT. As an experiment, our Product team asked ChatGPT for an executive summary of how to use AI to write marketing content. We’ve all spent the last couple of weeks being mind-blown by the text and copy that a chat AI can write. But what now? Now that the sharing hype is over, can this be used for anything useful?

AI: Looking deeper

At Papirfly we’ve been looking into AI use cases and the theory behind them for some time and, like others, we’ve been fascinated and impressed. AI has been hailed, theoretically at least, as the solution to many problems currently being dissected and analysed within “big tech”. It may seem obvious, but trying to implement the technology to fix those problems comes with its own challenges. It’s theorised that “AI can solve anything,” but we have some questions!

  • Where are the use cases that are suitable for it?
  • Why should AI solve those things?
  • Does the problem really need AI to solve it, or are we throwing technology at something because it’s “new and cool”?

Addressing the technology adoption curve

At Papirfly, we believe we’ve found a killer use case for AI (you could say we’re the “Innovators” on the technology adoption curve). But we need to help the technology find its rightful place, where it can stand on its own in the product ecosystem, and support other companies in using the technology properly and responsibly.

What’s next on AI from Papirfly?

This short blog is Papirfly’s introduction to our longer series on AI, “AI: beyond the hype”, where we’ll dig into wide-ranging topics on AI and how it might affect our customers and their customers. We want to ensure that we’re researching, analysing, and using AI in a responsible and sustainable way, and we have some exciting use cases and thought leadership coming in the new year. Stay tuned!

Contributions by Natalie Wilding, Martin Pospisil, and Yngve Myklebust

Watch our on-demand webinar