Information Architecture in 2018

A brief history

Information Architecture (IA) originated in library and information science - the study of how information is created, managed and organised.

In the 1960s and early 70s, the words ‘information’ and ‘architecture’ began to come together, with IBM and Xerox Labs writing lengthy reports on the topic. In 1976, Richard Saul Wurman coined the term and saw its key facets as gathering, organising and presenting information. In 1998, Louis Rosenfeld and Peter Morville popularised it with their book Information Architecture for the World Wide Web (otherwise known as the polar bear book), which brought IA into the mainstream by presenting core library and information science (LIS) concepts as fundamental components of the practice. It was in this period, the late 1990s and early 2000s, that IA became firmly associated with designing websites.

The Information Architecture Institute defines it as: “IA is about helping people understand their surroundings and find what they’re looking for, in the real world as well as online”.

IA Today

90 percent of the data in the world today has been created in the last two years alone. To illustrate how vast this is, there are 2,276,867 Google searches per minute and 86,805 hours of video watched on Netflix per minute. With so much information being made and consumed, there has never been a more vital time for information architects to make sense of it.

The role of the Information Architect has changed over time: the dedicated title has become less common, while the broader title of user experience (UX) designer has grown in popularity. While IA is still a specialist role, it is more frequently seen as part of a broad set of skills used by product and UX designers.

However, understanding IA is imperative for anyone designing products that people interact with to achieve a goal, complete a task or digest some kind of information, because at its core it’s all about how people flow through an information space in search of something they need. So, regardless of discipline or department, IA should be an important part of a number of different roles.

We’re still building and optimising websites to help people perform tasks, meet needs and create value for businesses. But there are new technologies we need to start understanding - both how they work and how people use them - so that we can design for people finding and consuming information through these new mediums. I’m going to look at two technological advancements that will impact the way we access and consume information in 2018, and that provide interesting challenges for designers, UX designers and Information Architects.


Number one: Artificial Intelligence (AI)

AI is, according to the Oxford Dictionary, “the theory and development of computer systems able to perform tasks normally requiring human intelligence”. As we know, AI can understand speech and make decisions, but there are three ways to categorise this intelligence:

  1. Artificial Narrow Intelligence: dedicated to assisting with or taking over a specific task. Its intelligence can’t generalise and can’t apply what it ‘knows’ to new categories of problems, e.g. the Roomba home vacuuming robot
  2. Artificial General Intelligence: the ability to apply intelligence to any problem rather than just one specific problem. It’s similar to human intelligence, including learning, planning and decision-making under uncertainty. This is forecast to arrive within 23 to 50 years, but some say it never will
  3. Artificial Super Intelligence: the capability of a system that surpasses human intelligence. It can reprogram, improve and reproduce itself, and it has a level of self-awareness and a conscience.

We haven’t progressed that far yet - we’re at phase one - so let’s look a bit closer at the challenges an Information Architect might face. Take Spotify, with over 30 million songs on the platform. They faced a huge IA and UX challenge: how do you help users find new songs and artists that they may actually like? Their solution was to develop a learning algorithm that makes recommendations based on your listening habits (applying the ‘people who like that, also like this’ logic) and compiles them into your own mixtape of songs you may like, called Discover Weekly. It’s clearly a huge success - just search #discoverweekly on Twitter.
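To make that logic concrete, here’s a minimal sketch of the ‘people who like that, also like this’ idea, using simple item co-occurrence in Python. The listening histories and names are hypothetical, and Spotify’s real system blends far more sophisticated models (large-scale collaborative filtering, audio analysis, NLP on playlists), so treat this as an illustration of the principle rather than their implementation:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical listening histories: user -> set of track IDs.
histories = {
    "ana":   {"track_a", "track_b", "track_c"},
    "ben":   {"track_a", "track_b", "track_d"},
    "chris": {"track_b", "track_c", "track_d"},
}

# Count how often each pair of tracks appears in the same history.
co_occurrence = defaultdict(int)
for tracks in histories.values():
    for t1, t2 in combinations(sorted(tracks), 2):
        co_occurrence[(t1, t2)] += 1
        co_occurrence[(t2, t1)] += 1

def recommend(user, n=2):
    """Rank tracks the user hasn't heard by how often they co-occur
    with tracks the user already listens to."""
    heard = histories[user]
    scores = defaultdict(int)
    for (liked, candidate), count in co_occurrence.items():
        if liked in heard and candidate not in heard:
            scores[candidate] += count
    return sorted(scores, key=scores.get, reverse=True)[:n]

print(recommend("ana"))  # ['track_d'] - ana's neighbours all like it
```

Even at this toy scale you can see the IA shift: findability is produced by an algorithm rather than a browsable structure, so the architecture lives in the model.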


Another example of AI that’s crept into our lives is Google’s Smart Reply, which scans your messages and uses what it understands from them to suggest ways you could reply to any given message. It’s great for quick, straightforward replies and can save time, but it’s not overly intelligent yet.

The challenge here is making this genuinely useful for users: the suggestions need to be relevant, strike the right tone and reflect the context of the message.
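As a thought experiment, here’s a deliberately naive sketch of that flow - message in, ranked suggestions out. The trigger keywords and canned replies are hypothetical, and the real feature relies on neural networks trained on huge volumes of conversations, which is exactly why relevance, tone and context are the hard part:

```python
# Hypothetical keyword -> canned reply lookup; Google's real Smart
# Reply learns suggestions from data rather than from a fixed table.
SUGGESTIONS = {
    "dinner": ["Sounds good!", "What time?", "Can't make it, sorry."],
    "meeting": ["I'll be there.", "Can we reschedule?", "Running late."],
    "thanks": ["You're welcome!", "No problem.", "Any time!"],
}

def suggest_replies(message, n=3):
    """Return up to n canned replies whose trigger keyword appears
    anywhere in the incoming message."""
    text = message.lower()
    for keyword, replies in SUGGESTIONS.items():
        if keyword in text:
            return replies[:n]
    return []  # offering nothing beats offering something irrelevant

print(suggest_replies("Are you free for dinner on Friday?"))
# ['Sounds good!', 'What time?', "Can't make it, sorry."]
```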

Looking at the deeper challenges for IAs working with AI, we come across unethical application and trust. Suppose a bus is heading towards an autonomous car, which has to swerve to avoid a serious collision; however, the car will hit a baby if it swerves left, and an elderly person if it swerves right. What should the autonomous car do?

As AI gets more sophisticated and ingrained in our lives, we must be able to trust that it reflects our human values and makes those kinds of decisions accordingly. This might seem like an extreme example, but someone has to consider it.


Number two: Voice as Input

Generally, there are two ways of implementing voice as input: voice-first devices with standalone hardware (e.g. Amazon’s Alexa) and screen-first devices with integrated software (e.g. Siri).

Voice interfaces provide some great opportunities for designers, IAs and users. They give control back to the user by allowing them to use their own language as the input, rather than having to learn a physical interface or touch a piece of glass to complete a task. They can also enable multitasking, as there is no physical device to hold.


However, they do pose some challenges:

  • What can I do? - You can’t create visual affordances, which are a core foundation of usability. It’s not clear what actions you can perform, as there is nothing visually telling you what to do
  • Computer says no - voice interfaces have a tricky time with the nuances of conversation and with words that have multiple meanings. They also struggle to understand context, which can result in a frustrating experience for the user (see the sketch below)
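To illustrate that second point, here’s a toy intent matcher. The phrases and intent names are hypothetical; real voice assistants use statistical natural language understanding rather than exact lookup, but they fail in the same way whenever phrasing or context falls outside what they’ve learned:

```python
# A toy intent matcher with hypothetical phrases and intent names.
INTENTS = {
    "play some music": "music.play",
    "set a timer": "timer.set",
    "what's the weather": "weather.report",
}

def match_intent(utterance):
    """Map an utterance to an intent by exact phrase lookup."""
    return INTENTS.get(utterance.lower().strip())

print(match_intent("Play some music"))   # music.play
print(match_intent("Put something on"))  # None - same meaning, no match
```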

But these two types of voice interface are converging (Amazon’s Echo Show, for example, pairs Alexa with a screen), which means you get the benefits of natural, human voice interaction combined with the visual affordances that screens bring.

The usability benefits include:

  • A reduction in cognitive load: users don’t have to remember the specific commands they would need with a voice-only interface
  • Efficiently conveying system status, since the screen can communicate it through a GUI
  • Providing visual signifiers of possible actions on the screen


So, what does this all mean for the Information Architect?

Number one: inclusivity

AI developments like Facebook’s automatic alt text, which describes photos to visually impaired users, can drastically improve people’s experience on the web. IAs can use these developments to create products and experiences that are more accessible to everyone. This is especially important when you consider that the UK has an ageing population. A question to ask yourself is: how do we, as designers and IAs, ensure that our content is findable and usable by everyone, regardless of their ability and regardless of the device they’re browsing on?

Number two: voice is the new touch

In 2016, mobile browsing overtook desktop browsing. Smaller screens posed the design challenge of fitting lots of content onto devices with limited space. With voice interfaces becoming more popular in consumer life, the next challenge we’ll face in 2018 is how to design voice interactions and conversations that help users complete tasks, find information and solve problems on a more intelligent level.

Number three: design the new affordances

We’ve been designing physical affordances for a long time now, and we’re generally pretty good at it (except doors - we still seem to suck at designing doors). We’ve relied on creating visual cues to indicate what you can and can’t do with a product. Voice interactions will require us to think beyond this. The challenge will be designing a product with no visible interface while keeping it usable and inclusive, so that it’s clear what it does and how to use it.