How Is My Computer Talking to Me? An Overview of Natural Language Processing

If you have ever used tools like Alexa or ChatGPT, you may have wondered how your computer is able to understand what you are saying.

In recent years, computers have become so good at understanding human language that they can now carry on conversations in ways that feel strikingly natural. To many, this ability to understand and generate language may signal that computers are on the verge of gaining sentience. That is not the case, but the question remains: how does it all work under the hood?

Three words: Natural Language Processing.

What is Natural Language Processing?

Giving computers the ability to process and understand human language is incredibly useful. Language is the primary way we humans connect and engage with each other and the wider world. Since the days of early computing, people have tried to figure out how to give computers the power to communicate with us using language.

Natural Language Processing (NLP) is a field of computer science that focuses on getting computers to understand, process, and generate human language. It draws on areas such as linguistics and machine learning. The role of NLP is to translate human language into a structure that can be understood and processed by computers.

This is no easy feat! Human language is complex, and the meanings of words can change based on things like context or tone of voice. There are also aspects of language, such as idioms, metaphors, and incorrect usage, that humans process intuitively but that are much harder for computers to contextualise.

Early NLP systems relied on sets of handwritten rules to process language. These programs were able to parse text, identify basic sentence structures, and give pre-programmed responses to common things users might say. While these early systems could handle basic language processing tasks, they failed when the language they had to process became more complex.

In recent years, the rise of machine learning has led to major advances in the field of NLP. Machine learning has enabled NLP models to process far larger amounts of data and to identify rules and patterns in language more efficiently. These advances have taken us from the early days, when NLP systems could only handle simple sentence structures, to today, when NLP is used to write news articles, poetry, and school assignments to a standard comparable with human writing.

What can NLP be used for?

At a high level, the field of NLP is concerned with how to give computers the ability to understand and generate human language.

This leads to two key focus areas of the NLP space:

Natural Language Understanding (NLU)

This area of NLP focuses on the ability of computers to analyse and extract meaning from speech or text.

Natural Language Generation (NLG)

This area focuses on the ability of computers to produce human language, i.e. generating text and speech.

Across these two areas, NLP has a range of useful applications. While tools like ChatGPT have highlighted the advances NLP has made in text generation, NLP is also used to perform other tasks such as:

  • Sentiment analysis (identifying the emotional tone behind a body of text; see the short sketch after this list)

  • Content analysis (analysing large bodies of text to summarise or extract high-level themes)

  • Text categorisation

  • Spam detection

  • Speech to text/speech recognition

  • Chatbots/digital assistants

  • Language translation
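
To make the first item on that list a little more concrete, here is a deliberately tiny sentiment analysis sketch in Python. It simply counts words from hand-made "positive" and "negative" lists; real sentiment tools use trained models, and the word lists and example sentences below are invented purely for illustration.

```python
# Toy lexicon-based sentiment scorer. Real NLP systems learn this from data;
# the word lists here are hand-made and purely illustrative.
POSITIVE = {"love", "great", "awesome", "happy", "helpful"}
NEGATIVE = {"hate", "terrible", "awful", "slow", "broken"}

def sentiment(text: str) -> str:
    """Label a piece of text as positive, negative, or neutral."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, the support team was so helpful!"))  # positive
print(sentiment("The app is slow and the checkout page is broken."))       # negative
```

Even this toy version captures the core idea: turn messy text into something a computer can count and compare.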

How does it work?

It can be difficult to conceptualise how the messiness and nuances of human language can be codified into a form that computers can interpret. Figuring out the best way to tackle this task has been the focus of the NLP field since its inception.

Today’s NLP systems are far more sophisticated and complex than ever before. This is in large part due to the rise of machine learning and the increased volume of text data available from the internet. Many modern NLP systems are powered by Large Language Models (LLMs), which are neural networks trained on large volumes of text data. These models learn information about the meanings and relationships of words based on how they appear in the training data. LLMs store this information as data points associated with individual words. The models can use these data points to calculate the likelihood that certain words will appear together in a sequence, or to measure how similar any two given words are to each other.
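
As a rough illustration of what those "data points" make possible, the sketch below compares hand-made word vectors using cosine similarity. The numbers are invented for this example; a real LLM learns vectors with thousands of dimensions from its training data.

```python
import math

# Hand-made, three-dimensional "data points" for three words. In a real LLM
# these vectors are learned from huge volumes of text and are much larger.
vectors = {
    "dog":   [0.9, 0.8, 0.1],
    "cat":   [0.8, 0.9, 0.2],
    "chair": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Return a score between -1 and 1; higher means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print(cosine_similarity(vectors["dog"], vectors["cat"]))    # high: related words
print(cosine_similarity(vectors["dog"], vectors["chair"]))  # lower: unrelated words
```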

For instance, let’s say we wanted to use an LLM to help predict the next word in this sentence:

“Yesterday I went to the park and walked my ______.”

The model would be able to predict that the next word is more likely to be “dog” than another word like “chair”. This prediction would be based on what it learned from the training data and the kinds of words that appeared in similar contexts to the sentence above.
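
If you would like to see this prediction happen on a real (though much smaller and freely available) model, the sketch below asks GPT-2 how likely "dog" and "chair" are as the next word. It assumes the Hugging Face transformers library and PyTorch are installed, and GPT-2 is our stand-in here, not the model behind ChatGPT.

```python
# A minimal sketch of next-word prediction, assuming the Hugging Face
# transformers library and PyTorch are installed. GPT-2 is a small, freely
# available LLM used here purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Yesterday I went to the park and walked my"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores for the next position into probabilities over the vocabulary.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)

for word in [" dog", " chair"]:  # the leading space matters to GPT-2's tokeniser
    token_id = tokenizer.encode(word)[0]
    print(f"P({word.strip()!r}) = {next_word_probs[token_id].item():.5f}")
```

On a typical run, "dog" comes out far more likely than "chair", which is exactly the behaviour described above.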

The volume of data that is used to train LLMs is what makes them so powerful.

GPT-3, the LLM that powers ChatGPT, was trained on a data set of 300 billion tokens (chunks of characters, usually whole words or pieces of words), while the model itself consists of around 175 billion parameters (at a high level, these can be thought of as the data points used to store information about the meanings of, and relationships between, words).
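
To get a feel for what a token actually looks like, the short sketch below uses the open-source tiktoken library (our choice for illustration; it is not mentioned above) to split a sentence into tokens.

```python
import tiktoken  # assumes the open-source tiktoken library is installed

# The "gpt2" encoding uses the same style of byte-pair tokens as GPT-3-era models.
encoding = tiktoken.get_encoding("gpt2")

text = "Yesterday I went to the park and walked my dog."
token_ids = encoding.encode(text)

print(f"{len(text.split())} words became {len(token_ids)} tokens")
for token_id in token_ids:
    print(token_id, repr(encoding.decode([token_id])))
```

Most common words map to a single token, while rarer words get split into several pieces.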

Through analysing such large collections of text, LLMs learn a great deal about the complexities and patterns of human language. This allows them to process, understand, and generate language with unprecedented accuracy.

Considerations

NLP has many powerful applications and has already had a huge impact on a range of fields, including healthcare, law, finance, and customer service. Over the coming years, the use and sophistication of NLP tools looks set to grow rapidly. As these tools become a bigger part of our lives and society, understanding how they work can help us engage with them intelligently and ethically.

Data is a crucial component of NLP tools. They rely on data to learn what meanings and relationships exist between different words, topics, and concepts. Language reflects a great deal about the values we hold as a society and how we view things like gender, race, and class. It is therefore important to think critically about how the values, ideas, and biases that exist in our language will be reproduced by NLP tools trained on language data. This is especially relevant for tools built on models like GPT-3, whose training data is collected from the internet. For instance, given that the bulk of that data comes from North America and Europe, the values embedded in it may unknowingly be imposed on Kiwis.

In an Aotearoa context, something else to consider is the development of NLP tools that can process te reo Māori. As we begin to integrate NLP tools into the infrastructure of our businesses, schools, and government services, it is important to think about how these tools support the use of te reo Māori. Things to consider include how investing in the Māori NLP space could ensure that te reo Māori thrives into the future and how to balance that with ethical considerations around Māori Data Sovereignty and the collection of Māori data.

At Nicholson Consulting, we consider these questions when we use NLP tools and techniques in our practice. NLP is an awesome tool, and when used with care and consideration it can offer enormous value to people and communities. In the future, we'll be publishing a case study about a project we worked on that used NLP, and how we approached it in ways that were both safe and effective.

Until then, if you would like to learn more about NLP and our mahi, please reach out to us by emailing hello@nicholsonconsulting.co.nz.
