Artificial Intelligence in 2050

Artificial intelligence is transforming how we design and build. By 2050, the effects of AI adoption will be widely felt across all aspects of our daily lives. As the world faces a number of urgent and complex challenges, from the climate crisis to housing, AI has the potential to make the difference between a dystopian future and a livable one. By looking ahead, we’re taking stock of what’s happening, and in turn, imagining how AI can shape our lives for the better.

Courtesy of Bjarke Ingels Group (BIG)

Artificial intelligence is broadly defined as the theory and development of computer systems able to perform tasks that normally require human intelligence. The term is often applied to a machine or system’s ability to reason, discover meaning, generalize, or learn from past experience. Today, AI already uses algorithms to suggest what we should see, read, and listen to, and these systems have extended to everyday tasks like route planning, autonomous flight, optimized farming, and warehousing and logistics supply chains. Even if we are not aware of it, we are already feeling the effects of AI adoption.

As Alex Hern explored at The Guardian, making predictions about the next 30 years is a mug’s game. However, following trend-lines to possible conclusions and imagining how we might live is a productive exercise. We’re taking a closer look at how artificial intelligence will shape design by 2050. From air taxis and urban intelligence to construction and the Singularity, AI will continue to shape how we live, work and play.

The Future of Work

© Iwan Baan

According to The Economist, 47% of the work done by humans will have been replaced by robots by 2037, including jobs traditionally associated with a university education. These jobs will be lost as “artificial intelligence, robotics, nanotechnology and other socio-economic factors replace the need for human employees.” Editorial Data & Content Manager Nicolás Valencia explored this idea two years ago, examining how automation will affect architects. The conclusion: the jobs most difficult to replace are those that require a high level of creativity and human interaction and involve few repetitive activities. These will be the last to go, and new jobs will also be created to monitor and coordinate intelligent machines and systems.

As we approach a time when the broad intelligence of AI exceeds human levels, existential questions arise. What should you study when any job can be programmed or replaced? Will universal basic income be adopted as a result? Microsoft co-founder Bill Gates believes so. “AI is just the latest in technologies that allow us to produce a lot more goods and services with less labor,” says Gates. How we work, and what we can work on, will change at an ever-faster rate. If half of all work can be done by robots or machines in the next 15 years, it’s likely that all work will be shaped by AI before 2050.

Urban Intelligence & Big Data

© Foster + Partners

AI and the “Internet of Things” are changing how we live, and in turn, society at large. Architect Bettina Zerza has explored how data and intelligent systems will dramatically shape our cities. She gives the example of micro sensors and urban technology that record air quality, noise pollution, and soundscapes, as well as the state of urban infrastructure at large. How people move, where emissions are worst, and how efficiently city systems run are just a few of the questions this data can help answer.

Today, 55% of the world’s population lives in urban areas, a number that will increase to 70% by 2050. Projections show that urbanization could add another 2.5 billion people to urban areas by 2050, with close to 90% of this increase taking place in Asia and Africa. Here, AI can further analyze and monitor how we move about the city, work together, and unwind. In 30 years, we will also have entirely new ways of moving, working, and unwinding in cities.

Artificial intelligence will continue to inspire discussions on the precarity of work, our shared ethics, ideas like universal basic income and urban intelligence, and how we design. Beyond productivity gains, it lets us rethink the way we live and how we shape the built environment. By doing so, we can begin to imagine new creative and social processes and, hopefully, work with AI to lay the foundation for a better future.

Conversations with Robots

In late 2016, Gartner predicted that 30 percent of web browsing sessions would be done without a screen by 2020. Earlier the same year, Comscore had predicted that half of all searches would be voice searches by 2020. Though there’s recent evidence to suggest that the 2020 picture may be more complicated than these broad-strokes projections imply, we’re already seeing the impact that voice search, artificial intelligence, and smart software agents like Alexa and Google Assistant are making on the way information is found and consumed on the web.
In addition to the indexing function that traditional search engines perform, smart agents and AI-powered search algorithms are now bringing into the mainstream two additional modes of accessing information: aggregation and inference. As a result, design efforts that focus on creating visually effective pages are no longer sufficient to ensure the integrity or accuracy of content published on the web. Rather, by focusing on providing access to information in a structured, systematic way that is legible to both humans and machines, content publishers can ensure that their content is both accessible and accurate in these new contexts, whether or not they’re producing chatbots or tapping into AI directly. In this article, we’ll look at the forms and impact of structured content, and we’ll close with a set of resources that can help you get started with a structured content approach to information design.
The role of structured content
In their recent book, Designing Connected Content, Carrie Hane and Mike Atherton define structured content as content that is “planned, developed, and connected outside an interface so that it’s ready for any interface.” A structured content design approach frames content resources—like articles, recipes, product descriptions, how-tos, profiles, etc.—not as pages to be found and read, but as packages composed of small chunks of content data that all relate to one another in meaningful ways.
In a structured content design process, the relationships between content chunks are explicitly defined and described. This makes both the content chunks and the relationships between them legible to algorithms. Algorithms can then interpret a content package as the “page” I’m looking for—or remix and adapt that same content to give me a list of instructions, the number of stars on a review, the amount of time left until an office closes, and any number of other concise answers to specific questions.
Structured content is already a mainstay of many types of information on the web. Recipe listings, for instance, have been based on structured content for years. When I search, for example, “bouillabaisse recipe” on Google, I’m provided with a standard list of links to recipes, as well as an overview of recipe steps, an image, and a set of tags describing one example recipe:

This “featured snippet” view is possible because the content publisher, allrecipes.com, has broken this recipe into the smallest meaningful chunks appropriate for this subject matter and audience, and then expressed information about those chunks and the relationships between them in a machine-readable way. In this example, allrecipes.com has used both semantic HTML and linked data to make this content not merely a page, but also legible, accessible data that can be accurately interpreted, adapted, and remixed by algorithms and smart agents. Let’s look at each of these elements in turn to see how they work together across indexing, aggregation, and inference contexts.
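Allrecipes.com’s actual markup isn’t reproduced here, but as a rough, hypothetical sketch of the kind of linked data that makes a featured snippet like this possible, a schema.org Recipe description embedded as JSON-LD might look something like the following (every value is an invented placeholder):

    <!-- Hypothetical linked data describing a recipe as machine-readable chunks -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Recipe",
      "name": "Bouillabaisse",
      "image": "https://example.com/images/bouillabaisse.jpg",
      "recipeYield": "6 servings",
      "totalTime": "PT1H30M",
      "recipeIngredient": [
        "2 lbs assorted white fish",
        "1 fennel bulb, sliced",
        "4 cups fish stock"
      ],
      "recipeInstructions": [
        { "@type": "HowToStep", "text": "Saute the fennel and aromatics in olive oil." },
        { "@type": "HowToStep", "text": "Add the stock, simmer, then poach the fish." }
      ]
    }
    </script>

Because each chunk (name, ingredients, steps, timing) is explicitly typed and related to the others, a search engine can lift exactly the pieces it needs for a snippet, a voice answer, or a list of steps.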
Software agent search and semantic HTML
Semantic HTML is markup that communicates information about the meaningful relationships between document elements, as opposed to simply describing how they should look on screen. Semantic elements such as heading tags and list tags, for instance, indicate that the text they enclose is a heading for the content that follows, or a set of related items that belong together in a list.
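For instance (a minimal illustrative sketch, not markup taken from any of the sites discussed here), a heading followed by a list can be written as:

    <!-- Semantic markup: the elements say what the content is, not how it looks -->
    <h2>Ingredients</h2>
    <ul>
      <li>2 lbs assorted white fish</li>
      <li>1 fennel bulb, sliced</li>
      <li>4 cups fish stock</li>
    </ul>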

HTML structured in this way is both presentational and semantic because people know what headings and lists look like and mean, and algorithms can recognize them as elements with defined, interpretable relationships.
HTML markup that focuses only on the presentational aspects of a “page” may look perfectly fine to a human reader but be completely illegible to an algorithm. Take, for example, the City of Boston website, redesigned a few years ago in collaboration with top-tier design and development partners. If I want to find information about how to pay a parking ticket, a link from the home page takes me directly to the “How to Pay a Parking Ticket” screen (scrolled to show detail):

As a human reading this page, I easily understand what my options are for paying: I can pay online, in person, by mail, or over the phone.
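The markup behind a screen like this, however, is often purely presentational, so an algorithm parsing the same page may see only a series of styled boxes with nothing to indicate that they describe payment options at all. As a hypothetical sketch (the class names and structure below are illustrative assumptions, not the City of Boston’s actual code), compare a presentational version of one option with a semantic alternative:

    <!-- Presentational only: a human sees an option, an algorithm sees anonymous boxes -->
    <div class="option-block">
      <div class="option-title">Pay online</div>
      <div class="option-detail">Use the city's online payment portal.</div>
    </div>

    <!-- Semantic alternative: the heading and list make the options machine-legible -->
    <h2>Ways to pay a parking ticket</h2>
    <ul>
      <li>Pay online</li>
      <li>Pay in person</li>
      <li>Pay by mail</li>
      <li>Pay over the phone</li>
    </ul>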

Voice queries and content inference
The increasing prevalence of voice as a mode of access to information makes providing structured, machine-intelligible content all the more important. Voice and smart software agents are not just freeing users from their keyboards, they’re changing user behavior. According to LSA Insider, there are several important differences between voice queries and typed queries. Voice queries tend to be:
longer;
more likely to ask who, what, and where;
more conversational;
and more specific.
In order to tailor results to these more specifically formulated queries, software agents have begun inferring intent and then using the linked data at their disposal to assemble a targeted, concise response. If I ask Google Assistant what time Dr. Ruhlman’s office closes, for instance, it responds, “Dr. Ruhlman’s office closes at 5 p.m.,” and displays this result:

These results are not only aggregated from disparate sources, but are interpreted and remixed to provide a customized response to my specific question. Getting directions, placing a phone call, and accessing Dr. Ruhlman’s profile page on swedish.org are all at the tips of my fingers.
When I ask Google Assistant what time Dr. Donion’s office closes, the result is not only less helpful but actually points me in the wrong direction. Instead of a targeted selection of focused actions to follow up on my query, I’m presented with the hours of operation and contact information for MultiCare Neuroscience Center.
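What makes the Dr. Ruhlman answer possible is structured, linked data about the practice that an agent can interpret; the weaker Dr. Donion result suggests what happens when that data is missing, ambiguous, or out of date. As a hedged sketch only (the names, hours, and URL below are invented placeholders, not swedish.org’s actual markup), the relevant linked data might look something like this:

    <!-- Hypothetical schema.org data an agent could use to answer "what time does the office close?" -->
    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Physician",
      "name": "Dr. Ruhlman's Office",
      "url": "https://www.example.org/providers/ruhlman",
      "telephone": "+1-555-555-0123",
      "openingHoursSpecification": [
        {
          "@type": "OpeningHoursSpecification",
          "dayOfWeek": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
          "opens": "08:00",
          "closes": "17:00"
        }
      ]
    }
    </script>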
Getting started: who and how
Design practices that build bridges between user needs and technology requirements to meet business goals are crucial to making this vision a reality. Information architects, content strategists, developers, and experience designers all have a role to play in designing and delivering effective structured content solutions.
Practitioners from across the design community have shared a wealth of resources in recent years on creating content systems that work for humans and algorithms alike. To learn more about implementing a structured content approach for your organization, these books and articles are a great place to start:


Content Everywhere, Sara Wachter-Boettcher
“Content Modelling: A Master Skill,” Rachel Lovinger
Content Strategy for Mobile, Karen McGrane
Designing Connected Content, Carrie Hane and Mike Atherton