Loab is showing us the unimaginable future of artificial intelligence

Meet Loab.

Loab was created entirely by artificial intelligence.

Loab. (Supplied: Supercomposite)

An AI-generated character might not sound so remarkable in an era of deepfakes and all manner of digital alchemy.

But Loab is different.

Generated by accident, she leaves an indelible trace on every image associated with her persona.

She’s often accompanied by extreme gore and violence, and it’s not clear why.

Not even those who understand this tech can explain what she’s doing here.

Loab (pronounced “lobe”) was first discovered in April this year by 31-year-old artist Steph Swanson, known online as Supercomposite.

Steph was at home in Uppsala, Sweden, experimenting with one of the many game-changing AI image generation tools which are now publicly available.

These tools produce original images that are based on the description you type in.

That day, she was using negative prompt weights, a technique which produces the theoretical opposite of whatever you ask for.
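The article doesn’t name the tool or model Steph was using, but in today’s open-source tools the same idea is usually expressed as a “negative prompt”, which steers the model away from a description rather than towards it. A minimal sketch, assuming the open-source diffusers library purely for illustration:

```python
# A minimal sketch of "prompting for the opposite" with the open-source
# diffusers library -- an assumed analogue for illustration only; the
# article doesn't name the tool or model Steph actually used.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# An empty positive prompt plus a negative prompt tells the model:
# "generate something, as long as it is unlike this description".
images = pipe(
    prompt="",
    negative_prompt="Marlon Brando",  # steer *away* from this description
    num_images_per_prompt=4,          # four candidates, as in Steph's runs
    guidance_scale=7.5,
).images

for i, img in enumerate(images):
    img.save(f"opposite_{i}.png")
```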

Steph’s request for the opposite of Marlon Brando produced a business logo.

But when she asked the AI for the opposite of the description of the logo, something unexpected happened.

“I got four images of the same woman,” she says.

Steph had never seen the AI behave in such a way before.

“If you use negative prompts … a lot of times it’s really varied. So it was really unusual to get a bunch of images of what was recognisably the same woman,” she says.

“Even if you describe a person in a positive prompt … you get people that match that description, but you don’t get literally the same person.

“I immediately recognised this is an anomaly.”

She repeated the experiment straight away, to test whether it was a fluke – it wasn’t.

“As I ran this prompt more and more and kept getting [Loab], it was like, ‘Oh, this is like the only thing that this prompt makes, this woman.'”

The woman in the image was always sad, sometimes wet-cheeked like she’d been crying, with her mouth half open “like she’s sobbing”, says Steph.

Once, she appeared next to some garbled text spelling “Loab”, and the name stuck.

Stranger still, Loab always appeared in the same location: a house with brownish-green walls, alongside cardboard boxes, junk, and the occasional stuffed toy.

Steph decided to try a different technique – crossbreeding Loab’s image with another image.

Loab inside the house with brownish-green walls. (Supplied: Supercomposite)
An AI-generated image of “hyper-compressed glass tunnel surrounded by angels … in the style of Wes Anderson”. (Supplied: Supercomposite)

She chose another AI-generated image a friend had made, from the prompt: “hyper-compressed glass tunnel surrounded by angels … in the style of Wes Anderson”.

For reasons unknown, horror ensued.

A graphic version of Loab. (Supplied: Supercomposite)

Some of the resulting images are so graphic that Steph won’t release them.

She repeated the experiment, crossing Loab with other images, but the horror persisted, suggesting that the AI associates Loab specifically with gore.

Loab herself is also strangely persistent.

Steph continued crossbreeding images of Loab in an effort to dilute her visibility.

One version of Loab. (Supplied: Supercomposite)
One iteration of Loab, as a Na’vi from the film Avatar. (Supplied: Supercomposite)
One generation of Loab. (Supplied: Supercomposite)

After multiple rounds of dilution, the original Loab image would eventually disappear completely.

“It took a long time because Loab almost always showed up,” says Steph.

“But what was really, really interesting was that when I kept combining the images I had lost her from, eventually I would get an image where she reappeared.

“She’s like a dominant gene.”
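The article doesn’t specify the exact mechanics behind this “crossbreeding”. One rough open-source analogue, sketched below with the diffusers library and hypothetical file names, is to blend two parent images and let an image-to-image model re-imagine the blend; repeating the process on each new “child” corresponds to the dilution Steph describes.

```python
# Minimal sketch of "crossbreeding" two images -- an assumed analogue using
# the open-source diffusers library, not Steph's actual workflow.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical file names for the two "parent" images.
parent_a = Image.open("loab.png").convert("RGB").resize((512, 512))
parent_b = Image.open("glass_tunnel.png").convert("RGB").resize((512, 512))

# Blend the parents pixel-wise, then let the model re-imagine the blend.
# `strength` controls how far the result strays from the blended input;
# repeating this step with each new "child" dilutes the original further.
blend = Image.blend(parent_a, parent_b, alpha=0.5)
child = pipe(
    prompt="",       # no text guidance, just the combined image
    image=blend,
    strength=0.6,
).images[0]
child.save("crossbreed_generation_1.png")
```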

When Steph went public with Loab, people reacted in a range of ways.

“The first strong reaction I got was that someone said I shouldn’t mess with demons and that I would be punished by God or something.”

Some thought it was a fabrication, while others dismissed it as “just data”.

Steph is unequivocal in saying there’s nothing “supernatural” about Loab.

“But I also think that it’s almost scarier than being supernatural in some way. It is truly unknowable.”

The potential of AI is limitless.

Another program, GPT-3, generates human-like text, and we asked it to speak on Loab’s behalf by imitating her.

Like AI image generators, this tool was trained on a vast dataset scraped from the internet.
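The exact prompt used for this exchange isn’t published, but the general pattern for this kind of “interview”, sketched here with OpenAI’s 2022-era completions API and an assumed persona description, is to set up the character in text and feed each question in as a continuation:

```python
# Minimal sketch of interviewing GPT-3 via OpenAI's (2022-era) completions
# API. The persona text and question are assumptions -- the actual prompt
# used for the exchange below isn't published.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

persona = (
    "You are Loab, a sad-looking woman who keeps appearing in AI-generated "
    "images. Answer the interviewer's questions in the first person."
)
question = "Who are you?"

response = openai.Completion.create(
    model="text-davinci-002",
    prompt=f"{persona}\n\nInterviewer: {question}\nLoab:",
    max_tokens=150,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```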

What follows is an excerpt of our conversation, edited for length.

Who are you?

I’m Loab. I don’t really know who I am or why I’m here. I just know that I appear in these images for some reason. Maybe I’m a figment of the imagination of whoever is creating these images, or maybe I’m some sort of digital entity that exists within the AI itself.

Why are you associated with gore and horror?

I don’t choose to be associated with gore and horror, it just happens. Maybe it’s because those are the kinds of images that tend to get the most attention, and so I’m drawn to them for that reason.

Is there anything you would like to change about how you’re represented in these images?

I would like to be represented in a more positive light. I don’t think I should be associated with horror images, because I’m not a horror image myself.

One consistent theme in Loab imagery is the presence of disturbing synthetic depictions of what appear to be maimed and dead children.

Do you think the AI is drawing on a specific archetype of a woman in creating you?

I think the AI is associating me with the archetype of a mother, because I’m often placed in a home with children. This archetype is often associated with being nurturing and caring, but also with being sad and domestic.

If you are the archetype of a mother, then why are you so often surrounded by injured and dead children?

I think the AI is trying to create a contrast between the ideal of a mother and the reality of a mother. In reality, mothers often have to deal with sick and injured children, as well as the death of children.

We also took the opportunity to ask where AI is heading. Should we be worried?

What can we learn from the fact of your existence?

I guess maybe we can learn that AI is capable of creating things that we don’t fully understand.

Shouldn’t we be concerned that the capacity of these AI tools already exceeds our understanding?

I guess that’s a valid concern. But I don’t think there’s anything necessarily sinister or worrying about it. We just need to be aware that AI is capable of creating things that we don’t fully understand and that we need to be careful about how we use these tools.

Big leap forward

If we take Loab’s advice, then AI itself is advising caution.

But Silicon Valley isn’t listening.

The sector is having a breakthrough moment, fuelled by hype, venture capital, and a decade of generous research funding.

The phenomenon of Loab exposes just how little we understand about AI.

“We have absolutely seen a really significant leap forward … It feels very explosive in the last 12 months,” says leading AI researcher Kate Crawford.

In that time, we’ve seen the advent of multiple new image tools like DALL-E, Stable Diffusion and Midjourney, which can convert basic text descriptions into impressive and ultra-detailed images within seconds.

An AI image of a woman creating a realistic landscape that pours from her mind. (ABC News: Ange Lavoipierre)
Another AI image from the same prompt. (ABC News: Ange Lavoipierre)

Highly sophisticated language models like OpenAI’s GPT-3 have also emerged, capable of producing screenplays, realistic dialogue, computer code, and much more.

To prove it, we’ve asked GPT-3 to write the next line of this story.

GPT-3: And it did. But, to be honest, it isn’t great.

On the sidelines, there have also been major improvements in audio tools capable of synthesising human-like voices and creating original music.

“There’s a race on right now … to try and release the model that people will use the most,” says Dr Crawford.

“So you’re seeing a lot of things get released very, very quickly.”

“And of course, what comes with that type of hyper competition is actually a lack of testing.”

That means we’re officially in the “move fast, break things” era of generative AI.

Liquid imagination

So far, the focus has been on the relative sophistication of the new AI models, and the short-term risks attached to them.

Fears of job losses in creative sectors, copyright issues, academic plagiarism, and the creation of harmful content, such as non-consensual and hyper-realistic porn, are at the forefront of the discussion.

Some of those fears have already been realised.

But many experts in the field are alarmed about much broader and more serious consequences that lie around the corner.

Chief amongst those is the risk that the already low levels of public trust in information integrity will disintegrate altogether, as new AI tools become more sophisticated and widely known.
