Digital Isolation

If you’re muted in an online video game, does that violate your Civil Rights?

At first I thought, duh, no, recalling all of my experiences dealing with internet trolls or players clearly violating a user agreement to cause chaos. Typically, if you’re rude or doing something you aren’t supposed to do in an MMO and get kicked out, you deserve it. By using a privately owned platform, you’re consenting to its rules and regulations, including its policies on what warrants removal from the site. (8-year-olds cursing on Club Penguin comes to mind.)

A recent Kotaku article details a situation in which Runescape player and streamer Amro Elansari sued Jagex, the game’s developer, after the company permanently muted him in the game in 2019. Elansari claimed this was a “violation of due process,” “discrimination,” and an attack on his “free speech.” Elansari’s case failed in court, of course, and was not deemed a violation of Civil Rights. He had little ground to base his case on, especially after accepting the Runescape user agreement prompted to every player upon signup.

While Elansari’s situation has minimal repercussions, it made me wonder how this situation would play out in a future heavily dominated by VR, AR, and other integrated tech. I can’t help but consider the isolating effects of Black Mirror’s episode White Christmas in which a character is essentially “blocked” from society. Due to a neural chip, the blocked man appears to everyone as a blurred out, static-y silhouette, and to the man, everyone else appears as the same silhouette with all conversations muted. By our standards, permanent isolation is a highly unethical, cruel punishment that would certainly violate our Civil Rights.


As technology continues to be seamlessly integrated into our everyday experiences, I believe that situations like Elansari’s could quickly sneak up on us and begin happening at a larger scale. Sure, it’s a bummer to be kicked off of Twitter or Instagram, but what if in the not-so-distant future, a large tech company facilitated all of our interactions? (Similar to Ready Player One, perhaps.) To be blocked or removed from a platform like that would be detrimental to one’s social health and wellbeing.

Our current MMO platforms and social media sites have donned government-like rules, as they are the kings of their small slices of digital reality, but they have failed to implement strong, consistent, government-like oversight to properly deal with situations like these, which could extend into the future with heavier ramifications.

For now, Elansari’s situation and others similar to his may elicit an eye roll or a “he deserved it.” Yet, with the current speed and direction technology is headed in, and the ways in which it is melding with and changing our society and interactions, we may have to revisit our user agreements*.

“Once you have an augmented reality display, you don’t need any other form of display. Your smart phone does not need a screen. You don’t need a tablet. You don’t need a TV. You just take the screen with you on your glasses wherever you go.” -Tim Sweeney


*Speaking of user agreements…
Check out another post dedicated solely to the dark practices of user agreements!

Learn about a fun terms of service Google Chrome extension here.

Computer Choir

In August of 2013, the Mars rover Curiosity sang happy birthday to itself. Its singing (actually, vibrating) had no scientific value, but rather demonstrated a very humanistic nature.

Why do we like to create technology that can sing?

In 1961, the IBM 7094 became the first computer to sing, using computer speech synthesis. The song of choice? Daisy Bell. (You can listen to it here.) The popular film 2001: A Space Odyssey references this moment when HAL, an AI character, sings Daisy Bell while being shut down. We really like our singing computers.

Today, Microsoft’s Cortana will sing you a song if you ask. You can ask Google Home for a serenade. Apple’s Siri will gladly sing to you. Amazon’s Alexa can sing five different original scores, such as this fancy number:

“Technology, technology,
where would I be without
tech-no-lo-gyyyyyyyyyy?
Without the Wi-Fi I couldn’t say hi,
as for music, I couldn’t choose it.
Shopping lists would cease to exist,
and time would be on your wrist.
I thank my lucky stars that I’m here today,
I hope that you’ll agree.
Give me one, two, three shouts of love,
for tech-tech, tech-tech, technologyyyyyyyyy.
Wooooo-hoooooooo, technologyyyyyyyyyy.”

What is the purpose of asking our devices to sing, typically with less-than-great vocals, when they are purposefully programmed to play music from our streaming apps and libraries?

Perhaps people determine that the ability to sing marks intelligence in their machines. Perhaps when we can ask our machines any question imaginable, ‘can you sing me a song?’ is one of the first questions that comes to mind. Or maybe the ability to sing is a sign of status, giving a user or creator the ability to say that their machine can sing a song while another’s cannot.

As we continue to develop artificial intelligence, we continue to add human biases and desires into our designs, whether consciously or not. Music is an important bridge between people, as we constantly share and create songs as ways to relate to others. When introducing new technology, such as voice assistants, adding humorous, human-esque qualities allows tentative consumers to warm up to new products. Singing is an integral part of humanizing technology, even if that technology is millions of miles away on Mars, because singing happy birthday is something human astronauts would do if they were on the red planet for a year.

“I’m half crazy, hopeful in love with you / It won’t be a stylish marriage / I can’t afford the carriage / But you look sweet upon the street / On a bicycle built for two!” -Blur, Daisy Bell

Engineering Fun: AI Style

Orson Scott Card’s Ender’s Game (1985) describes a military-grade simulation based on artificial intelligence called “The Mind Game,” with which the novel’s main character must train to prepare for an alien attack. “The Mind Game” responds to the psychological and emotional states of its players, adjusting simulations and evolving over time to make the game more challenging.

Aside from the dystopian military aspect of “The Mind Game,” the machine learning that Card created in his story may not be all that different from where AI and video game development are headed today.

Bot Birds

Take Angry Birds, for example, the 2009 smash hit that reached #1 on the app store 311 times in the paid app category, at one point for 80 consecutive days, performing far better than any other paid app (source). Since its original launch, numerous spin-off games as well as multiple movies and TV series have been created based on the original Angry Birds.

Rovio Entertainment, the game studio behind Angry Birds, is working to come up with new ways to keep fanatic Angry Birds fans engaged. Its latest app, Angry Birds Dream Blast, requires Rovio’s development team to crank out 40 new levels every single week to build new content for fans, an outrageous amount of work that is nearly impossible to maintain in the long run.

Enter stage left: artificial intelligence, machine learning, and deep neural networks.

The “Fun Factor” Formula

Rovio has begun to introduce a combination of AI systems to help build and test its latest Angry Birds game. Game design is a delicate balance: make a game too easy, and consumers will complain that it’s too similar to previous games or not challenging enough. Make a game too difficult, and people will lose attention and quit. In a game where the mechanics are quite simple to grasp (you’re launching birds with the tap of a finger to hit and kill things), as most mobile games are, it’s difficult to solve the “fun factor” formula.

This AI combo is made up of two bots working in tandem. One applies reinforcement learning to measure how “beatable” a level is, while the other tests heuristic factors to measure how fun and playable the level is.
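Rovio hasn’t published its implementation, but the two-bot division of labor can be sketched in miniature. In this toy version (every name, weight, and threshold below is my invention, not Rovio’s), one bot estimates beatability with simulated playouts while the other scores heuristic “fun” features, and a level only ships if both agree:

```python
import random

def beatability_bot(level, trials=1000, skill=0.7):
    """Estimate win rate via random playouts; 'skill' is the
    chance a simulated player clears a single obstacle."""
    wins = sum(
        all(random.random() < skill for _ in range(level["obstacles"]))
        for _ in range(trials)
    )
    return wins / trials

def fun_bot(level):
    """Score invented 'fun' heuristics: piece variety up,
    pacing best near a sweet spot of six obstacles."""
    pieces = level["piece_types"]
    variety = len(set(pieces)) / max(len(pieces), 1)
    pacing = 1.0 - abs(level["obstacles"] - 6) / 10
    return max(0.0, 0.6 * variety + 0.4 * pacing)

def evaluate(level):
    """A level ships only if it is neither trivially easy nor
    nearly unwinnable, and it clears the fun threshold."""
    win_rate = beatability_bot(level)
    fun = fun_bot(level)
    return {
        "win_rate": win_rate,
        "fun": fun,
        "ships": 0.05 < win_rate < 0.6 and fun > 0.5,
    }

level = {"obstacles": 5, "piece_types": ["bubble", "bomb", "rainbow", "bubble"]}
print(evaluate(level))
```

In Rovio’s real pipeline the beatability agent is trained with reinforcement learning rather than random playouts, and the heuristics are far richer, but the division of labor (one bot for winnability, one for playability) is the same.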

Using AI as a measurement tool generates and stores massive amounts of data, and with mass data comes the potential to sell said data. Rovio currently stores its players’ data on Amazon Web Services (AWS). Rovio’s bots can run thousands of level checks in just a couple of hours, saving humans immense amounts of time and moving toward a model where the AI simply tests and adjusts the levels on its own. No human intervention necessary.

Will Wright, a renowned American video game designer, predicted in 2005 at a Game Developers Conference that “a game development company that could replace some of the artists and designers with algorithms would have a competitive advantage” (source).

Angry Birds and other mobile games are just the start; AI similar to that in Ender’s Game could soon be generating content, creating maps and levels, and inventing new games from the ground up. Theoretically, robots would be building all of our games for us.

Will consumers be able to tell the difference between machine-generated and human-generated gaming content? Will the industry boom, or suffer, as a result? Should we embrace our robo-developers, stick with human devs, or find ground in between the two?

“Man’s relationship with technology is complex. We always invent technology, but then technology comes back and reinvents us.” -Atul Jalan

Algorithmic Sublime

Meet Liam Nikuro, a Japanese-American living in California. Liam works in music, fashion, and entertainment, and he loves 2Pac and Justin Bieber. Here’s a picture of Liam below. But Liam isn’t real; he’s completely computer-generated, a CG human running an Instagram account with over 11,000 followers.

Liam’s parents/creators work for 1sec, a “virtual human planning and production” multimedia company. In the last year or so, there has been a boom in CG human influencers for large brands, and the trend isn’t slowing. Reportedly, retail spending driven by AI and CG humans will reach $12 billion worldwide by 2023, $3.6 billion of it by the end of this year.

Michelle Steinberg, president of the influencer marketing firm Domain, offers this: “I don’t see why these influencers couldn’t be CGI-generated as long as consumers are able to relate to them, there’s follower retention and the public doesn’t feel as though they are being deceived.”

An earlier arrival to the computer-generated social media influencer scene is Lil Miquela. Like Liam, she’s also involved in the world of music and fashion. Lil Miquela has released songs on Spotify, starred in a controversial Calvin Klein ad campaign, participated in a recent Prada fashion show in Milan, and is making further deals with Chanel and Fendi. She openly admits to her 1.6 million Instagram followers that she’s a robot, both in her bio and on individual posts. Yet it only seems to add to her fanfare. Lil Miquela was created by the startup Brud, a transmedia studio with a Google Doc for a website.

Out and Proud Robots

These virtual influencers benefit large fashion and entertainment companies in many ways; they require no paycheck, and they’ll never be involved in a scandal that could negatively impact the brand(s) they’re advocating for. These “people” aren’t physical; they are computer motion graphics, backed by teams of extremely clever copywriters, that teeter somewhere between reality and the uncanny valley. Social media, primarily Instagram, is full of “robot” influencers bought or commissioned by companies. They come equipped with full personalities: likes and dislikes, hopes and aspirations, as well as conventionally attractive, nearly impossible-to-obtain faces and physiques.

Perhaps you’re hesitant to follow a robot who exists solely to share images full of product placement. But many other people aren’t deterred. A majority of followers flock toward these virtual influencers precisely because they are not human. (Perhaps it’s because the coming-of-age generation grew up watching Hannah Montana, so we’re all accepting of fake celebrity personas.) These Instagram influencers were not the beginning of made-up digital content, however; the musical group Gorillaz was founded in 1998 as a virtual band and won its first Grammy over a decade ago. In 2016, Louis Vuitton hired a character from Final Fantasy for an advertising campaign. Last month, Miley Cyrus adopted yet another persona, Ashley O, a character from a Black Mirror episode, releasing a single that became the #1 pop song in June. Amanda Ford, creative director at integrated agency Ready Set Rocket, sums it up well: “People know the world we’re living in, nothing we see on social media is really authentic and no one is being their real self, so whether it’s a person with a beating heart or a robot, I don’t think it matters anymore.”

We have existed in a blurred world between technology and reality for decades now. Are CG virtual influencers that much of a leap from advertising and marketing tactics previously used, where celebrities were told what to say and models were rebuilt in Photoshop?

“The boundary between science fiction and social reality is an optical illusion.” -Donna Haraway, “A Cyborg Manifesto”

Brainy Tech: Part 2

How can we hope to connect our brain to the internet, let alone transfer our consciousness to the cloud, if we don’t even fully understand how the human mind works? In Brainy Tech: Part 1 I discussed various groups that have worked towards the goal of connecting a brain to the internet. But how feasible is that really, and what would it look like?

Biological Cartography

In 1986, a group of scientists reconstructed the neural wiring of a roundworm, completing a diagram with 302 neurons and roughly 7,000 neural connections. (You can see the full diagram here.) In 2014, scientists unveiled a one-cubic-millimeter piece of a mouse brain. That single cubic millimeter was sliced into 25,000 pieces, and it represents just one-thousandth of a mouse brain, which is one-billionth of a human brain, to give some perspective as to how far off we are from mapping the first human brain wiring diagram (also known as a connectome). The Human Connectome Project, launched in 2009, claimed that, with funding, it would provide a human brain wiring diagram within five years. But as of November 2018, the project had yet to be officially completed.

Still hopeful for the future of consciousness transfer to the cloud? With a $10,000 deposit, you can be put on the waiting list to have your brain uploaded to the internet by Nectome, a US startup. But the downside is two-fold: preservation of your brain must begin at the exact moment of death, and the process must also be the cause of death. The subject/customer has the blood flow to their brain replaced with embalming chemicals that preserve neuronal structure while slowly killing the patient. A third downside is that Nectome does not yet have a method for reviving or uploading the costly brains it stores. Still, the company has won two prizes from the Brain Preservation Foundation for preserving a rabbit’s brain (2016) and a pig’s brain (2018).

With discoveries and advancements such as these, it’s only natural that we turn to science fiction to compare myth with reality. If you were to one day upload your brain to the cloud or pay for its preservation, how could you ensure the outcome? For example, will your consciousness (you, right now) be transferred to an electronic database? Or will your consciousness merely be copied, so that while you die, a replica of you-which-is-not-you lives on, thanks to your sacrifice? Consider these scenarios:

  • Altered Carbon (consciousness transfer): A TV show (and book) set in the 25th century, the human mind has been digitized and can easily be transferred from one body to another.
  • San Junipero, Black Mirror season 3, ep. 4 (consciousness transfer): A digital simulated reality created for the elderly, accessible both shortly before and after death. While alive, residents may visit San Junipero, a nostalgic ’80s beach town, for a couple of hours each day; upon death, their consciousness is transferred to an online server where they become permanent residents of San Junipero.
  • Soma (consciousness copying): A video game set in the year 2104. Your character is a consciousness copy of an original human named Simon Jarrett. Rather than being transferred, your consciousness is continuously copied into new robotic shells to complete various tasks, each copy unaware that it is a copy because, from its perspective, it experienced a transfer.
  • Mindscan (consciousness copying): A book exploring the concept of consciousness copying. The original consciousness signs a waiver relinquishing their rights and identity, pays a fee, and their consciousness is copied into an android body, which for all intents and purposes assumes the original’s life, claiming all property, finances, etc.

Which scenario do you think would be the closest to our reality, if any? Understanding the full functionality of the human brain as well as a process to transfer a consciousness are pursuits that have lasted over a century.

“Human progress has always been driven by a sense of adventure and unconventional thinking.” -Andre Geim

Brainy Tech: Part 1

The fusion of man and machine has been a popular concept in the science fiction community for nearly a century. It makes sense, of course, to consider what a future may be like in which our consciousness is either uploaded to the cloud or somehow connected to the internet, turning our brains into supercomputers.

Now, after consuming this fiction for years, people want the real thing, believing our next evolutionary step will be a neural connection to technology. But how close are we really to making these sci-fi dreams a reality?

The timeline of a human brain-internet connection is rather short. In 2017, human brainwaves were streamed live on the internet via a Raspberry Pi (dubbed the ‘Brainternet’) by researchers at the Wits School of Electrical and Information Engineering. Then in 2018, the ‘BrainNet’ (notice a pattern in the names?) sent EEG signals from two human brains to a third person in order to solve a Tetris-like game. There are no instances yet of a brain actually being hooked up to the world wide web and interacting with it, but we have to take this one step at a time. In April of this year, researchers at the Institute for Molecular Manufacturing in California hypothesized that nanorobots implanted in the brain could connect to the internet, but that has yet to be seen.

Just a couple of days ago, Elon Musk unveiled Neuralink, a startup company devoted to transmitting data between people and computers, which has successfully passed trials on rats and is moving on to primates, with human tests scheduled for 2020 if all goes well. Neuralink’s goal is primarily to assist paralyzed patients by giving them the ability to type with their minds, but Musk believes the technology will eventually expand to everyday use. Experts in the field believe mind-computer technology could be readily available by 2060.

Various creators have portrayed the mind-machine link as either the savior of mankind or the demise of humans, with very few narratives falling somewhere in between those two ends of the spectrum. It seems that our current position on the concept is wary, but intrigued, which for now seems right about where we should be.

“Any technological advance can be dangerous. Fire was dangerous from the start, and so (even more so) was speech – and both are still dangerous to this day – but human beings would not be human without them.” -Isaac Asimov

Dark Patterns

I’m sure you’ve had this experience: you’re doing some online shopping or browsing a website when a pop-up appears. Obviously, every company website has its own agenda, which is usually to push sales. Because of this, the pop-up you’re now staring at is forcing you in a specific direction: to accept whatever it is the site is pushing. Take this example from Loft, where the only way to evade the pop-up is by clicking, “no thanks, I prefer to pay full price.” Obviously I don’t want to pay full price for anything, but I also don’t want to sell my soul to an e-mail marketing chain.


User experience design was created to ensure consumers and users could easily and intuitively navigate a website, app, game, or any piece of content provided by a company. We all know and love the feeling of when something just works. Although seemingly simple on the outside, the process of reaching that simple outcome is very complex, and I am eternally grateful for the UX geniuses out there who work every day to make the world around us more functional and accessible. But not all UX designers use their power for good, and this is where the concept of dark patterns emerges. Dark patterns break the #1 user experience rule of putting the user first, instead driving the user to unknowingly make accidental or destructive choices.

“Manipulinks” & “Confirmshaming”

There is a surprisingly large variety in the ways websites try to trick users, to the point where I have to give these “dark UX designers” props; although they’re using their powers for evil, their tactics are still very creative. The Senate is working to pass a bill that would hinder the use and effectiveness of these dark patterns, but it remains to be seen whether it will pass. Until then, here are some of the ways websites are trying to manipulate you:

 

  1. Sneaky Moves
    • Some online shopping sites have been known to add additional products to users’ carts without their consent, so the user has to go in and manually delete these items in their cart before check-out.
    • Hidden costs are added under the guise of “care & handling” or similar wording, seen only right before placing an order; these additional fees cannot be removed.
    • Similar to hidden costs, hidden subscription fees can also be added at checkout. These fees can be especially pesky: they may appear as a flat fee, but after reading the nitty-gritty user agreement in 8-point font, you learn that the fee will auto-renew each month and the only way to cancel is to jump through multiple hoops, whereas signing up takes only the click of a button.
  2. Do It Now
    • Irrelevant countdown timers pressure shoppers to confirm their purchase or risk losing a discount or a “place in line” (this one makes me laugh because it truly makes no sense in an online marketplace).
    • Websites are littered with “limited-time only!” or low-stock messages to the point where our eyes barely register them. But when you’re on the fence about making a purchase, it might be the extra coercion needed to convince you to push the “buy now” button.
  3. Shame.
    • Websites will visually gray out or dull their less preferable option (which is usually your preferred option) such as opting out of e-mails or marketing campaigns.
    • Users may be pressured into purchasing a more expensive variation of a product as the most costly version is often pre-selected.
  4. Peer Pressure
    • A user may see other users’ activity live on a website in the form of pop-up messages that say “Emily from Salt Lake City just saved $200 on her order!” or “346 items were sold this hour!”
    • Users may see unclear or untrue testimonials on a website.
    • You might be forced to provide your email address in a pop-up before proceeding to a site, barring you from seeing any content until you’ve given the company what they wanted.

So, next time you’re perusing a website or considering making a purchase, do a mental double-check as to why you’re making that decision. A majority of these dark UX tactics rely on shame and emotional manipulation, and though it may be unjust, it is our job as consumers to make informed decisions and purchases.

“Please don’t go!  | You’ll miss us when you’re gone.  |  You’re going to miss some great deals!” -Unsubscribe e-mail messages


Further reading:

Dark Pattern Examples

Dawn of The Deepfake

The 2020 U.S. presidential election is coming up. Say you’re online, watching a recap of a candidate’s speech. But say the video is fake, and you are unaware of that fact. You just watched a deepfake, an AI-synthesized video created from bits and pieces of real media, and one of the biggest, most underrated threats in the 2020 election.

The term “deepfake” was coined in December of 2017 by a Reddit user in the subreddit r/deepfakes, a place where creators could post their made-up content, most of which involved the creation of fake pornography. The subreddit was banned in 2018, although a clean deepfake community has reemerged on the site, consisting of made-up political hoaxes, fake news, and celebrity videos. Deepfakes have since been banned on other websites, although upholding the bans proves difficult. As this internet community grew, so did the deepfake’s academic counterpart: in 2017, computer scientists at academic institutions across the globe were studying computer vision, which focuses on AI and the processing of video and imagery in computers. As both communities advanced, so did their technology.

Deepfakes are nearly indistinguishable from their real video counterparts. Just last month, a deepfake of House Speaker Nancy Pelosi circulated the internet, making her appear to be drunk in an interview. (You can watch the video here and learn more about how it was created.) Yet presidential candidates don’t seem to be worried about deepfakes. Axio, a cybersecurity assessment and protection company, reached out to all current presidential candidates; only nine Democrats as well as the Trump campaign responded, none of whom expressed concern about deepfakes or had a contingency plan in place if one were to surface. They argued that it should not be their responsibility, but rather fall into the hands of the Democratic National Committee, the FBI, or media and journalism companies.

Are they right? Should their campaign managers monitor the internet for deepfakes? Should it fall to someone else to keep the lid on deepfakes? Governments across the globe are treating this fake media as a real threat to security; China is considering banning deepfakes altogether, and the U.S. is invested in countering them. Even before high-quality deepfakes emerged in 2017, researchers had been warning officials that these computer-generated human syntheses would undermine public trust in the media.

In 2018, numerous apps were launched allowing people without a computer science background to make their own fake videos. These apps use artificial neural networks and massive graphics processors to learn from existing images and videos to identify each phoneme (unit of sound) the speaker makes as well as what they look like as they speak each one. There are approximately 44 phonemes in the English language, and according to researchers at Stanford University, it takes a 40-minute video source for the AI to gather enough information to make a realistic deepfake capable of saying anything.
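The scale of that training requirement is easy to ballpark. Assuming a standard 30 frames per second (my assumption; the researchers don’t specify a frame rate), a 40-minute source video gives the model roughly 1,600 frames of material per phoneme on average:

```python
# Back-of-envelope: footage per phoneme from a 40-minute
# training video. The 30 fps frame rate is an assumption.
fps = 30
minutes = 40
phonemes = 44  # approximate count in English

total_frames = minutes * 60 * fps           # 72,000 frames
frames_per_phoneme = total_frames / phonemes
print(total_frames, round(frames_per_phoneme))  # 72000 1636
```

In practice, phonemes aren’t uniformly distributed in natural speech, so rare sounds get far fewer examples; that is part of why such a long source video is needed.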

In the past, video has proven to be a sound source of evidence and a basis for facts. As we enter an era of increased technology and AI capabilities, we must consider the ethical, social, and political consequences of deepfakes in order to preserve truthful journalism and honest content creation. If a time comes when we can no longer trust our eyes and ears to distinguish the real from the fake, what will that mean for our society?

“The days are not far when AI will also control the politicians and the media too.”
-Amit Ray

Digitized Free Will

You can’t stop thinking about buying a new pressure washer after your old one broke, and now you keep seeing ads for pressure washers on Facebook. Your sister’s boyfriend keeps popping up as a “suggested friend” on Instagram even though your sister has no social media accounts. The Google ads on the right-hand side of your search results are promoting foods you consistently buy at the grocery store. You start to think that maybe, just maybe, the mic in your phone is listening to you, because how else could so many corners of the internet have such detailed records on you?

It’s actually much simpler and more automated than you think.

Artificial Data

Your digital footprint is much more visible than you think it is. Ads and content are steered toward us by algorithms fed by our search results and clicks across the web, gathered from all of our smart devices. Nowadays, with the number of sites that require us to create accounts, agree to user policies, or provide other kinds of information (location, whether we’re accessing a site via phone/tablet/computer, etc.), privacy has become more about controlling the data collected on us rather than stopping data collection, which is close to impossible.

This poses a long list of ethical questions that the U.S. has all but bulldozed past. By 2065, it is estimated that Facebook will host more accounts belonging to the dead than to the living. What will happen to their data? Technically they agreed to a user policy when they originally signed up for Facebook, but they can no longer revoke permissions or delete their accounts if the Facebook policy or access changes following their death.

The European Union passed the General Data Protection Regulation last May, which allows internet users to ask what information a company has stored about them and to request that it be deleted. Users can also report companies or services if they feel their data is being misused. California has followed suit and passed a similar law that will go into effect in 2020. Responsibility still falls on individual users to keep tabs on how companies use their data, but at least the general public in those areas will finally have some more control over their privacy, if they so choose.

‘I Agree’

We have all seen the ‘I Agree’ button bombard our screens more times than we can count, whether it be a pop-up when you visit a site or at the end of a lengthy user policy. But where is the ‘I Disagree’ button? Or the ‘Can I Negotiate This’ button?

On average, you would have to spend 76 working days reading all of the digital privacy policies you agree to in the span of a year. Just reading Amazon’s terms and conditions out loud takes 9 hours. User agreements are not only difficult to read; why would anyone read them when we aren’t given much of a choice aside from ‘I Agree’? We can’t negotiate the policy at all, and if we want to use websites, social media, or other services, it’s their way or the highway. User agreements are a large component of our digital footprint that often goes unnoticed. Agreeing to privacy and user policies does not equate to consent, as these clicks are uninformed and non-negotiated.
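That “76 working days” figure implies a staggering word count. As a rough sanity check (the 240-words-per-minute reading speed and 8-hour workday are my assumptions, not the study’s), the math works out to nearly nine million words of policy text per year:

```python
# Back-of-envelope: words of privacy-policy text implied by
# "76 working days per year." Reading speed and workday
# length are assumed for illustration.
words_per_minute = 240
hours_per_day = 8
working_days = 76

total_minutes = working_days * hours_per_day * 60  # 36,480 minutes
total_words = total_minutes * words_per_minute
print(total_minutes, total_words)  # 36480 8755200
```

For comparison, that is roughly fifteen copies of War and Peace, every single year, just to know what you’re agreeing to.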

In the last few years, the internet has transformed from a social novelty into a social necessity. “Just don’t go online” or “Don’t agree to anything” doesn’t cut it anymore. Perhaps the rest of the country will soon follow California’s lead, but I doubt it. Maybe instead we’re headed toward an Orwellian 1984 of digital surveillance and monitoring. For right now, it seems like we’re residing somewhere between those two paths.

“Data is powerful and can inform on us in unexpected ways.” -NYT Editorial Board

Pushing Buttons

The year is 2006, and my dad and I are inside the Mission: Space ride, a NASA-style training shuttle simulator, at Disney World. The ride hasn’t even started yet, but we’re pushing any and all buttons around us. Thirteen years later, that’s what sticks in my mind the most when I think about Epcot, and I realize yet another psychological trigger that Disney has nailed really well: we all want to push buttons. Especially red ones.

We’re all familiar with the generations-old pop culture trope to not push the big red button. And we’re all familiar with that sick sense of curiosity followed by a strong urge to press it.

“That Was Easy.”

At the start of the 20th century, in the midst of the Second Industrial Revolution, French nobleman Marquis de Castellane, appalled by the emergence of the push button, remarked, “Do you not think this prodigious diffusion of mechanism is likely to render the world terribly monotonous and fastidious? To deal no longer with men, but to be dependent on things!” (If only he could see us now.) Buttons were suddenly a magical gateway to alerting others of fire, honking car horns, and opening elevators. The act of pushing a button came to signify comfort, convenience, and control, while those wary of technological advancements in the early 1900s viewed it as alienating or a sign of lacking skill.

Fast forward 100 years: hundreds of thousands of people are paying $6.99 for a red button from Staples that, when pressed, says “that was easy.” It serves no purpose other than being fun to press. I myself got one for Christmas the year it hit the market. Similarly, crosswalk buttons and certain elevator buttons act as placebos that give the illusion of choice but in reality do nothing. Our world is full of functional and useless buttons, but they’re all fun to press. There are even apps and websites where people can tap or click a digital red button.

The Kill Switch

buttonz-01.png

Historically, it’s difficult to pinpoint the origin of the big red button. In the Cold War era it was used as a kill switch or abort function, very different from how the red button is portrayed in cartoons and games, where it often serves as a self-destruct button, red alert, or missile launch. Other forms of the red button today take the shape of emergency brakes, fire alarms, and nuclear power plant “scram switches” that cause control rods to plummet into a reactor to prevent nuclear meltdowns. All of these prevent disasters from occurring, contrasting with the image cartoons, movies, and games have painted in our minds of what red buttons do. But despite what we know about real red buttons, pop culture has conditioned us to view them as something not to push, and therefore we must always push them.

 “Ooooooh! What does this button do?” -Dee Dee from “Dexter’s Laboratory”