Facebook has released research into a new technique called “radioactive data.” The technique is analogous to the use of radioactive markers in medicine.
Once you’ve figured out the problem you want to solve and decided that AI is the right way to go, the next step is to figure out how to apply it. It’s important to understand whether AI should replace what a human does (which can reduce time spent on monotonous tasks and free people up) or whether AI should enhance what a person does by making a task easier to do.
AI is “cool and interesting” for engineers and developers, but those who try to learn how to build it can become frustrated and suffer from imposter syndrome, according to new research.
Understanding the science of emotions is important for AI designers. Failing to understand the full complexity, nuance and variability of emotional expression risks both bias and avoidance: people will easily be able to outsmart an AI by faking expressions.
The world of natural language AI has changed significantly in a relatively short space of time, driven by the shift to transformer architectures. Beyond this shift, there appears to be a bifurcation in approach.
Natural language is one of the most important challenges in AI. And things have moved rapidly over the past few years. This primer will help you get to grips with the basic concepts in a (mostly) non-technical way.
When there’s too much information, reducing the amount of information – simplifying the data – can reveal more. This idea is foundational to understanding how new architectures are being used in natural language processing.
As anyone who uses an AI voice assistant knows, such as Alexa, Siri, Cortana or OK Google, the wake word isn’t the only thing that triggers a response. In the case of Amazon’s Echo, the phrase “my pants on” has a high chance of waking the device, risking accidentally recording the conversation and sharing it with Amazon.
New research reveals a basic power imbalance that is not easily remedied: given the informational position of the designer, there is simply no way to fully maintain commitment to a user’s autonomy.
Personalization is a core promise of AI but there is a core tension at play – how to personalize without reducing people’s ability to decide for themselves.
In a techno-social economy, as more human problems are subject to computation, human-centered AI will be AI that ultimately keeps humans in control.
Explainability and transparency are critical performance criteria for AI systems. Bias and fairness are increasingly top-of-mind, which raises the stakes on AI developers to be able to interrogate and understand their models. New research raises concerns about how explainability tools are being used in practice, finding failures in how practitioners apply them.
New research claims to have developed AI that’s capable of detecting deception by looking at gait and gesture. This is potentially a goldmine for the AI surveillance community. But we can also see the danger: how the seeds of bad AI can be sown very early in development.
According to Tristan Harris, the director and a co-founder of the Center for Humane Technology, technology (and by association, AI) is operating under the wrong paradigm.
AI adoption is increasing but AI-at-scale is not. These six must-do steps help to close the gap.
Facebook AI research has released a new dataset and benchmarks for training emotionally-aware chat systems.
It’s that time of year again and McKinsey has updated their global survey on AI adoption and practice. This survey is valuable because it focuses on the relative competitive position of companies using AI. There are two types of company now – those that lead in AI and those that lag.
If you’re a native English speaker, you may not have consciously realized that masculine and feminine pronouns are not grammatically the same as each other. This matters because large, open, shared AI language models have an important, inherent gender bias simply because of this quirk of the English language.
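To make the mechanism concrete, here is a toy sketch (the four-sentence corpus and the counts are invented for illustration, not taken from the research): because English marks gender on pronouns, simple co-occurrence statistics – the raw material of language models – pick up a gendered skew around occupation words.

```python
# Toy illustration of how gendered pronouns seed bias in language models.
# The "corpus" below is invented purely for demonstration.
from collections import Counter

corpus = [
    "the nurse said she was tired",
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the engineer said he wrote code",
]

def pronoun_counts(word):
    """Count 'he' vs 'she' in sentences that mention the given word."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        if word in tokens:
            counts["he"] += tokens.count("he")
            counts["she"] += tokens.count("she")
    return counts

# A model trained on text like this inherits the skew in these raw counts:
print(dict(pronoun_counts("nurse")))     # {'he': 0, 'she': 2}
print(dict(pronoun_counts("engineer")))  # {'he': 2, 'she': 0}
```

A real language model works with far richer statistics than raw counts, but the direction of the skew comes from the same place: the corpus.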
Apple and Goldman Sachs have been tweet-stormed this week. Now Goldman is under investigation by the NYDFS and Apple’s brand is tarnished. None of this needed to happen. Here are our initial conclusions and the questions all leaders should ask of their AI product teams and data scientists.
Here I’ve published my workshop opener from the House of Beautiful Business’s fabulous 2019 event in Lisbon. We have a problem in AI: on one hand, we have engineers, AI scientists and researchers who understand and drive the technology, which gives us a techno-centric view of the solutions.
Apple may have inadvertently exposed sexism in the credit card business. Even the best companies in the world with the best experience with AI will stumble as Apple and Goldman Sachs are now. The question for all is: how will you create an AI governance program to manage your machine employees?
Neuroscience and artificial intelligence have multiple intersection points. There are many examples of discoveries in neuroscience and psychology which provide inspiration for AI researchers, as well as the other way around. It’s a mutually reinforcing cycle as AI researchers adapt biological structures to machine architectures while neuroscientists and psychologists learn to apply various AI techniques.
A dark-humor interactive video made its debut this week, designed to capture (literally) your attention and showcase how popular social media apps can use facial emotion recognition technology to make decisions about your life, promote inequalities, and even destabilize democracy. It’s particularly targeted at teens and young people who use Snapchat and Instagram.
A fascinating new report details just how expansive the use of artificial intelligence surveillance is around the world. In a report released on 17 September, the Carnegie Endowment for International Peace outlines the global expansion of AI surveillance, the vast majority of it being rolled out since 2017.
It feels like the start of a tech backlash. Or at least, a mid-course correction. A backlash against design that manipulates our predictable cognitive weaknesses, disrupts our attention spans and creates new forms of psychological suffering. A backlash against the constant gaming of our mental models of technology. A backlash against predictive algorithms and the inferences they draw about us.
OpenAI – the AI research group, originally founded by Elon Musk and Sam Altman and now heavily backed by Microsoft – has released some truly groundbreaking AI using reinforcement learning techniques in a multi-agent game of hide and seek. Games showcase important theoretical advances in AI, but putting too much weight on game performance alone can mislead.
Trump’s trade war has overshadowed the other battle – supremacy in AI. After China’s State Council released a national strategy for AI development in July 2017, there was a race to understand what this would mean. Both China and AI are difficult enough to comprehend on their own.
There’s a common view that we can infer how someone feels from how they look. AI that automates this and then applies the knowledge to tasks is growing in popularity. But now we know that use of the technology is way ahead of the science.
AI strategy starts with getting people empowered, confident and creative about AI: so they can see the opportunity for themselves and start innovating in ways they haven’t before. Our in-person facilitation and Opportunity Sprint does this, but now you can do it yourself.
Are you a facilitator or business leader who needs a practical way to develop, or refresh, an innovation, marketing or operations strategy? It’s time to take a focused look at AI and make your next executive offsite an AI innovation workshop.
Many companies implementing AI have done so with rapid adoption of sophisticated technologies. This has led to technologists – data scientists, AI researchers and engineers – being far ahead of the rest of the business. Leaders are having to answer the question “is your AI ethical?”
It’s important to understand whether AI should replace what a human does (which can reduce time spent on monotonous tasks and free people up) or whether AI should enhance what a person does by making it easier to do a task, making task completion more powerful or adding to the skills of the individual.
Facial recognition technology need not be a surveillance and privacy disaster. But it’s been designed and deployed in ways that do not increase trust or address the fundamental human desire for choice and control.
OpenAI’s initial non-profit status provided some optimism that its technologies would be broadly beneficial for society. But the organization’s transition to a for-profit entity and Altman’s new role as CEO raise questions about the company’s governance.
The biggest thing slowing down successful AI deployments is people: the absence of skills and employee resistance.
Last week, Yann LeCun, the chief AI scientist at Facebook and a professor at NYU, held a webinar lecture on the future of deep learning. LeCun is one of the world’s uber AI researchers. He is, quite literally, designing the future of intelligent machines, and because he is at Facebook, intelligent social machines.
AI is incredible technology, something that would have been the stuff of science fiction only a few years ago. The problem is science fiction has also provided people with plenty of images of an AI surveillance dystopia. It’s hard to convey exactly how these systems work if designers only talk in probabilities.
AI is hot. 88% of senior leaders agree that AI will help their business be more competitive. But 58% of businesses say that less than 10% of their company’s digital budget goes towards AI. So it’s not surprising that only 15% of AI projects succeed. This disconnect comes as no surprise to us.
Here’s a video of Helen talking about augmentation vs automation, what humans are good at vs what machines are good at and the templates we’ve developed to help you work through these topics. Click on the image below to download the template. Try it out and let us know what you think at email@example.com.
The most advanced companies today understand the scale of AI’s potential impact. They aren’t ignoring AI and they aren’t just following the herd with a simple, linear solution. Instead, they are evaluating their business problems from new perspectives.
The retail industry, while collecting a ton of customer segmentation data on websites, is lagging in deploying this data to personalize a feed for a customer. This matters because the ability to magically show a customer something that they love increases loyalty. Only a quarter of specialty retailers actually deploy the data they collect to personalize the customer experience.
We read, listen, watch and discuss with many people. It’s simply impossible to include every person, podcast, video, book, paper or article we’ve touched, but we would like to acknowledge some specific expertise, as well as suggest follow-up reading for those who want to go deeper.
Designing for feedback and co-learning between human and machine is complex, experimental and counter-intuitive. In the case of nudging, consumers – and likely employees too – don’t take long to get wise to it.
In the voice wars, context will be king. But computational efficiency matters — especially for Google, which wants to be able to link the online and offline worlds but can’t position itself for privacy like Apple’s Siri or assume always-on wifi like Amazon’s Alexa devices. A true consumer, mobile, cross-platform, ad-driven experience will require both approaches to be given equal priority.
In the world of AI, context and intent are very difficult for machines to learn. What Pinterest has achieved is impressive because they are unique in their ability to serve up a personalized and contextually relevant visual experience, at scale and at speed.
AI will represent a fundamental challenge to anti-discrimination regimes. Because AI uses training data to look for correlations that are predictive of an output, with no theory or intuition from a human, it will naturally seek out proxies for genuinely predictive characteristics.
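A minimal sketch of the proxy problem, using invented synthetic data (nothing here comes from a real system): a “model” that is never shown the protected group attribute can recover most of its predictive power from a correlated feature such as zip code.

```python
# Toy demonstration of proxy discovery with invented synthetic data.
# The protected attribute is withheld from the second rule, yet a
# correlated feature (zip code) lets it reach almost the same accuracy.
import random

random.seed(0)

rows = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Zip code correlates strongly with group membership...
    zip_code = "100xx" if random.random() < (0.9 if group == "A" else 0.1) else "200xx"
    # ...and the historical label also correlates with group.
    label = 1 if random.random() < (0.8 if group == "A" else 0.2) else 0
    rows.append((group, zip_code, label))

def accuracy(predict):
    """Fraction of rows where the prediction matches the label."""
    return sum(predict(row) == row[2] for row in rows) / len(rows)

# A rule that peeks at the protected attribute directly...
direct = accuracy(lambda r: 1 if r[0] == "A" else 0)
# ...versus a rule that only ever sees the zip code.
proxy = accuracy(lambda r: 1 if r[1] == "100xx" else 0)

print(f"direct: {direct:.2f}, proxy: {proxy:.2f}")
```

No human told the second rule anything about group membership; the correlation in the data did all the work, which is exactly what a learner optimizing for accuracy will find.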
Core to human-centered AI is explainability. If a machine cannot explain its reasoning in a way that humans understand and on human terms, the AI isn’t working for people. Researchers from Georgia Institute of Technology, Cornell University and the University of Kentucky recently published the results of teaching a machine to generate conversational explanations.
The headlines still talk about AI job Armageddon; widespread job loss through AI and automation because AI is faster and better at everything humans can do. But emerging evidence tempers concerns about machines replacing professional work. Commercial AI applications offered by startups are being bought to enhance human capabilities rather than replace them.
While it may be tempting to design such robots for optimal productivity, engineers and managers need to take into consideration how the robots’ performance may affect the human workers’ effort and attitudes toward the robot and even toward themselves.
Human-centered AI design is different. To fully appreciate how the speed, scale and scope of AI materially alter the standard design process, we have tailored our Sprints to AI development needs, helping technical and non-technical people come together to make better AI-enabled products, faster.
For AI designers, there is one cognitive bias that is particularly important; it’s called the representativeness heuristic.
The most advanced companies today understand the scale of AI’s potential impact. They are evaluating their business problems from new perspectives. They are using the inspiration of how others are solving problems with AI to find new problems. They understand that AI’s solutions allow them to address problems they couldn’t before.
The traditional view of automation and labor is that automation increases the value of labor by increasing the productivity of a chain of tasks. Now that more machine learning-based AI has been deployed in more places, what’s really happening is more nuanced.
Human-centered AI design makes it possible to bring in deciders – HR professionals and managers – so that they can experiment and tune the system as often as required, always retaining control over configuring the AI to find people in ways that truly reflect the diversity of human performance while still delivering the efficiencies required.
For product managers and innovators, there are many new things to consider when designing relationships with customers and AI.
Artificial intelligence is one of the most important technologies of our time. AI is everywhere – it is a technology that is diffusing through everything and it touches our lives every day. This is because we are increasingly governed by our digital selves and AI powers the digital world. We have spent years researching AI.
Human-centered AI design goes beyond user interface design; it takes account of the broader implications of AI, including accountability for mistakes, ethics, bias and governance. It considers an AI to be an active agent with a distinct intelligence, one that humans are responsible for designing.
So far there have been two major eras in the history of computing. The first was the era of the PC, from the mid-70s to the mid-90s. This was the heyday of IBM and then of Microsoft as they started to take over the world. This was the beginning of the desktop world.