How does the General Public Perceive and Feel about AI?
With “Artificial Intelligence” being the buzzword of the moment, we may be tempted to follow tech filter bubbles and believe that widespread social acceptance is a matter of days, if not already achieved. However, AI is still moving from a futuristic, research-bound concept to an everyday reality for millions of people. Moving from the circumscribed domains of academia and tech startups into everyday business and use requires a careful understanding of people’s sentiment and attitudes, their readiness level, and the concerns that may prevent trustworthy and sustainable adoption.
We may therefore ask: what is the true sentiment of the broad public towards AI? Is that sentiment homogeneous, or do differences exist depending on the AI application, geographical area, or position in industry? We have summarized the most recent independent surveys on the topic, observing that, for the average citizen, tech and ethics go hand in hand.
The AI-in-society landscape
Globally, people have mixed feelings about AI: there is an almost even split between those who say they feel excited about it and those who feel nervous. Notably, the proportion of people who feel more concerned than excited has increased over the years (instead of shrinking thanks to better awareness and access to information), primarily in the Anglosphere and continental Europe. Asia, by contrast, is where excitement is highest and most stable.
Overall, awareness and proper knowledge are perceived as too low by most people: only half of the respondents say they have an average to moderate understanding of AI, how it works, and what its main applications are.
Knowledge of AI is highest among the youngest, peaking for Gen Z and Millennials (who report almost identical percentages), while older generations report lower knowledge levels. Regional variations exist; in Europe, for instance, there are significant differences between northern countries and the Mediterranean and Balkan areas. This digital divide partly correlates with the historical divide in access to the Internet, new technologies and digitalization; given AI’s accelerating potential, there is a risk that the gap may widen if not addressed properly, separating markets and societies.
Why so nervous?
Nervousness, and above all its increase over time, is tightly linked to ethical concerns. AI is widely expected to make disinformation worse; the risk of fake images and videos created to hurt someone is among the most pressing worries, paired with low average confidence in being able to detect such fakes. Moreover, there is little trust that companies will be able to protect, or care about, personal data.
Notably, even though they are the most enthusiastic about and aware of AI matters, more than 80% of young people have at least one concern about it, the most pressing being privacy and the management of personal data, as well as ethical problems and control of, or addiction to, technology. The risk of bias and discrimination causes less distress, as many respondents reported higher distrust towards other people than towards AI. Following popular storytelling, AI is often likened to a third-party human, so perception of it is often shaped by comparing the behavior of strangers with that of the technology.
To face these emerging challenges and concerns, the majority of citizens, Americans and Europeans above all, would back regulation and oversight of emerging AI technologies, including chatbots and driverless vehicles.
Policy makers and regulatory bodies are expected to intervene to regulate emerging AI technologies: while tech enthusiasts may advocate unrestrained opportunities, civil society would gladly welcome clear labeling of AI-generated content, legal liability for AI companies, restrictions on the export of advanced AI technology, and disclosure and transparency by AI companies about the models used and their training. At the same time, research should be promoted, not only on technological advancement but also on transparency, responsibility and sustainability, with a proper balance between staying at the frontier and aiming for sustainable development.
AI in the workplace: different views than for “civil” applications
In contrast to the nervousness recorded among consumers, AI is better perceived in the workplace. Despite the ongoing concern that it will take over many jobs and positions, most workers believe that AI will make their job better and more productive. Differences exist among countries: in the UK and the US, high-skilled workers are the most concerned, whereas Italians believe office and operative workers are the most easily replaced.
Across generations, young people are the ones who most expect further disruption of their working experience, believing that industries such as gaming, telecommunications and tech will be the most impacted. Science and research (especially biomedical research) are equally expected to undergo major changes in the near future.
Nonetheless, there is overall optimism about the future, with 62% of Europeans thinking that AI and automation will make their work better, e.g., by reducing repetitive tasks (although significant differences exist, from over 70% in the Scandinavian countries to below 50% in Portugal and Greece). Similar sentiment is recorded among Americans, too.
In general, workers with knowledge and mastery of AI are expected to gain a competitive advantage both over their peers and over AI itself. The major concern, on the other hand, is the loss of empathy and the “human touch”, which could hamper relationships among peers and result in negative experiences in many settings, e.g., doctor-patient interactions or job recruiting.
Within the workplace, workers strongly demand AI regulation, to protect their privacy, involve them in technology design and adoption, and ensure transparency in human resources management. On the last point, very few candidates would be happy to apply for positions where AI scrutiny is used to select applicants, while automated evaluation of performance and merit is sometimes regarded as a safeguard against human biases.
Employees vs C-levels
The friction between employees and the C-suite is strikingly relevant. Employees are often more ready to adopt AI than their leaders imagine, and would be happy to receive specialized training; however, fewer than half feel properly informed about their companies’ latest adoption plans.
Leaders are expected to properly balance speed and safety; they are also expected to recognize their responsibility in driving the transformation.
Many leaders, however, report employees’ readiness as a barrier to adopting AI, thereby misjudging the true readiness level and often revealing their own struggles with leadership alignment.
Solving this tension, between employees’ readiness and positive sentiment on one side and leaders’ own concerns and vision on the other, is key to unleashing the technology’s full innovative potential and bringing value and efficacy to the workplace. Training, alignment of core values and a long-term vision are the key elements to steer companies towards safe and sustainable AI adoption and to resolve several frictions between older and younger generations.
In conclusion
Overall, the insights from recent surveys show citizens’ pressing demand for trustworthy AI: for instance, better awareness and information should be made accessible to the broad public, including older generations, who form a large consumer base and the majority of middle and upper management. There is a striking difference in AI acceptance depending on whether we speak of everyday-life applications or adoption in the workplace: while the impact on the job market is perceived, on average, quite positively, the percolation of AI into everyday life should be backed by decisive action on ethics, respect for data and privacy, and transparency. Societal adoption requires building trust and empowerment, rather than tech-oriented marketing and storytelling.
While people would back policy measures for regulation and oversight, companies embedding ethical governance would gain customers’ trust and improve their standing, not only in the B2C market but also in B2B, which involves interactions with procurers who are less tech-enthusiastic: there, transparency, awareness and a clear “responsibility value chain” would be strong assets. Within workplaces, too, workers demand transparency about AI adoption and its applications and are often ready to kickstart innovation; the C-suite, on the other hand, is often more cautious, but would benefit greatly from exchanges with employees.
In general, as expected for a technology still in the making, there are mixed feelings that differ across age groups, geographies and cultural backgrounds.
However, a common denominator can be clearly recognized: people ask for clarity, transparency and shared ethical attitudes by AI developers and deployers.
Pursuing these may lead to shared sentiment and trust, allowing for more sustainable technologies and broader, smoother acceptance in business and society. As a note of caution, we remark that all surveys were conducted before 2025, after which the American administration drastically changed its storytelling. If and how this shift and its consequences, such as the expected deregulation of basic and ethical oversight of AI development, will affect public sentiment is yet to be measured: will it trigger a drift towards laxer sentiment, or a counter-rejection? And would that be limited to the US, or will it percolate elsewhere? In any case, acting proactively would help to meet widespread concerns and secure a position even in the face of fluctuating opinions.
References
[1] McKinsey Digital, “Superagency in the workplace: Empowering people to unlock AI’s full potential”, Report 28 January 2025
[2] European Commission, “Commission survey shows most Europeans support use of artificial intelligence in the workplace”, Directorate-General for Employment, Social Affairs and Inclusion, 13 February 2025
[3] Eurobarometer, “Artificial Intelligence and the future of work”, Survey, 12 February 2025
[4] L. Rainie, C. Funk, M. Anderson and A. Tyson, “How Americans think about artificial intelligence”, Pew Research Center, 17 March 2022
[5] M. Faverio, A. Tyson, “What the data says about Americans’ views of artificial intelligence”, Pew Research Center, 21 November 2023
[6] M. Carmichael, J. Stinson, “The Ipsos AI Monitor 2024: Changing attitudes and feelings about AI and the future it will bring”, Ipsos, 6 June 2024
[7] J. Dupont, V. Ali, A. Price, S. Wride, D. Baron, “What does the public think about AI?”, PublicFirst, Center for Data Innovation, July 2024
[8] G. Galli, “Intelligenza Artificiale: cosa ne pensano le persone?”, Repertorio Salute, 15 September 2024
[9] E. Veratti, A. Lo Martire, “The Future of Work in Italy: AI at the Heart of a Generational and Industrial Revolution”, Brain and Company, December 2024
[10] YouTrend, “Gli italiani e l’intelligenza artificiale: cosa ne pensano, cosa si aspettano”, Fondazione Pensiero Solido, 19 May 2023
Daniele Proverbio holds a PhD in computational sciences for systems biomedicine and complex systems, as well as an MBA from Collège des Ingénieurs. He is currently affiliated with the University of Trento and works on scientific and applied multidisciplinary projects focused on complex systems and AI. Daniele is the co-author of Swarm Ethics™ with Katja Rausch. He is a science communicator and a life enthusiast.