Lost amidst the pandemic and civil unrest? AI that can collapse the economy arrived 10 years early.
Another economic shockwave is going to hit its stride in 2 years or less. 12 examples of jobs that will be affected, and 12 questions we should be asking. We need an AI plan as badly and urgently as we needed a pandemic plan.
Five years ago, Tim Urban published The AI Revolution: The Road to Superintelligence. Two images have always stuck with me from that post.
Tim’s timeline of AI’s inevitable impact on human progress. It’s not a matter of if, but when the spike happens. Most people agree on this. What has been up for debate is whether we’re 5 years away, 20, or 50.
Which brings us to his follow-up point: what does it feel like to be standing in front of progress the likes of which we’ve never known? Pretty ordinary, actually.
We can’t see what’s to the right. Not only this, but humans are bad at predicting the future or anticipating change.
Why do I bring up these graphs? Because I think we’re going to look back at GPT-3, the new AI everyone’s been talking about the past month, as the first time we could begin seeing the curve change before us. The first tangible shadow of something dramatic.
It’s been just over a month since the beta of GPT-3 was released. I understand that people are too busy surviving the present, between COVID-19 and civil unrest, to worry about much else. I nevertheless am surprised and concerned with how little it broke into mainstream discourse, and how quickly it faded from the news. This is not just another technology story, and I am convinced those who view it as such will regret it.
People have a habit of waking up to massive change when it’s already too late. Individuals with their careers, leaders with their businesses, politicians with their economies. First it’s dismissed as fringe. Then it washes over everything.
The very best examples of what GPT-3 can miraculously do are included below, followed by something I have not seen enough of: some of the questions we should all be asking — ethically, economically, and from a policy perspective.
A Quick Recap: Who and What
OpenAI is an artificial intelligence research laboratory. They’re the ones behind the new AI named GPT-3.
OpenAI was founded in 2015, in San Francisco, by Elon Musk (Tesla/SpaceX), Sam Altman (Y Combinator), and some other big names who collectively put in $1B to fund its work. In 2019 Microsoft contributed another $1B. The stated goal of OpenAI is to promote and develop friendly AI in a way that benefits humanity as a whole. For context, the year before it was founded, Musk posted the following on Edge.org:
“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like DeepMind [Google’s AI research laboratory], you have no idea how fast. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most.”
We’re now just past the halfway point of that 10-year timeframe.
So what is GPT-3?
GPT-3 might sound like the name of an Android from Star Wars, but it’s really a neural network–powered language model — a computer program that, in layman’s terms, takes a load of text and then “speaks” by guessing what the next word [and each word after] is most likely to be.
Generative Pre-trained Transformer 3 (GPT-3) is an obscure-sounding mouthful that largely manifests itself as a sort of [frighteningly smart] chatbot. You send it messages. It replies.
— Simon Pitt, for OneZero
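To make the “guessing the next word” idea concrete, here is a toy sketch in Python. To be clear, this is emphatically not how GPT-3 works internally (GPT-3 uses a 175-billion-parameter neural network, not word counts), but it shows the same core move: given what came before, predict the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which word most often follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def guess_next(following, word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(guess_next(model, "the"))  # "cat" followed "the" twice, "mat" only once
```

GPT-3 does the same kind of prediction, only over patterns learned from hundreds of billions of words rather than a ten-word corpus, and over far longer contexts than a single preceding word.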
That’s the core of it, but it’s also not limited to language. GPT-3 exemplifies something called transfer learning: a model trained on one task can be applied to an entirely different one. In GPT-3’s case, it can analyze and correctly complete images, for example.
For context, GPT-2, its predecessor, was released in February 2019 with 1.5 billion parameters, and was initially deemed too dangerous to release in full by OpenAI, citing “safety and security concerns.”
Specifically, concerns around how it could be used to generate convincing fake news and flood every corner of the internet with it, from tweets to your inbox. Given that close to 40% of internet activity is actually bots, this is a legitimately terrifying possibility, especially since more than half of those bots are already malicious.
Well, here we are just over a year later, and OpenAI has presented us with GPT-3, with over 100 times more parameters than GPT-2 — 175 billion of them.
While that should frighten anyone who was already rattled by GPT-2, I think that’s missing the point.
AI will upend the world, and it doesn’t need to be malicious to do so.
Mindless, and paradigm-breaking
Every prediction about AI taking all our jobs in 10, 20, or 50 years leaned on us first figuring out something closer to AGI: Artificial General Intelligence. That is, an AI that can think for itself, like a human.
GPT-3 is not AGI. Very far from it. Again, GPT-3 is just an automated guessing machine. A really, really good one. It guesses the right string of words — or pixels — based on the directions or questions you give it, drawing on all the data it has. There is no actual thinking going on.
But here’s the thing: how much of our human work requires that much original thinking? How often are you on some form of autopilot?
Essentially, anything that isn’t creative is instead a matter of inferring answers using available data. And GPT-3 uses something like the equivalent of all publicly available English-language human knowledge to make its guesses.
More importantly, what if a machine can replicate self-awareness, even if it’s not actually thinking? Below, a question and answer with GPT-3 by Sonya Mann:
Which brings us to the paradigm shift that changes everything:
“For me, the big story about #gpt3 is not that it is smart — it is dumb as a pile of rocks — but that piles of rocks can do many things we thought you needed to be smart for. Fake intelligence may be dominant over real intelligence in many domains.”
– Anders Sandberg, Senior Research Fellow at Oxford University
When I was in Junior High, I had a delightfully eccentric music teacher named Mr. Albert. He had a few sayings he would spontaneously bark at us — nuggets of wisdom, unrelated to music — in the hopes of helping us commit them to memory. There is one I am often reminded of:
“Pattern recognition is intelligence!”
What sets most of the highest performers apart? They are highly skilled at spotting, emulating, and combining patterns of success most of us don’t see. In our efficiency-driven, largely conformist society, you can easily argue economic success is 95% pattern recognition, and 5% bold originality. Most jobs don’t ask us to be original.
GPT-3 is highly adept at pattern recognition, and draws on more data than any one person could ever hold in their mind. It doesn’t matter that it’s “mindless”, as some dismissively labeled it: what this tells us is that AI doesn’t need to be truly intelligent — in the Artificial General Intelligence sense — to make people widely redundant for most types of modern work. The implication of this can’t be overstated. It means that redundancy on a scale that could collapse the economy as we know it is suddenly, imminently possible. It means taking timelines that were 20 years out and bringing some of them right up to the coming few years. Are we even remotely ready?
Fake intelligence may be dominant over real intelligence in many domains.
12 common jobs / entire lines of work GPT-3 can start substantially augmenting right now.
Teaching, coaching and consulting, counselling and mediation, healthcare, law, journalism, creative writing, web design, graphic design, marketing, coding — to name just a few. This is an economic iceberg. The true magnitude lurking beneath the water is hard to overstate.
To add a layer of awe, remember that all of the examples below were each created in a matter of days, in some cases hours, merely as “side projects”, using the power of GPT-3, which is itself just scratching the surface of what its model can theoretically do. These were all made within the limited beta testing environment OpenAI released. What will happen when the entire world is given access in the coming year?
Also remember that this is just the beginning, and GPT-3’s inevitable successors, and brothers and sisters, will come quickly and be significantly better. The size of state-of-the-art language models is growing by at least a factor of 10 every year. It took only one year to get from GPT-2 to GPT-3, and as some have pointed out, GPT-3 isn’t even close to what it could be! It uses “magnificently obsolete architecture” and is “trained in the dumbest way possible”. And yet…
1. Teaching / tutoring / education-related jobs
… including dependent industries, like textbooks or educational software.
How will AI like GPT-3 impact teachers or tutors? Or test-prep or summarization services?
Both Paras Chopra and McKay Wrigley have each already created bots with GPT-3 that can answer any question while pretending to be any famous person.
Here is GPT-3 pretending to be Richard Dawkins.
McKay is turning it into a service called Learn from Anyone, and has already collected $20,000 from early subscribers, in less than a month.
And for knowledge search and acquisition, here’s an example of a search engine, powered entirely by GPT-3, that makes Google seem primitive. Imagine if, instead of a bunch of links, Google gave you the exact answer in plain language, plus the single most relevant link explaining it.
For example, what if your homework was to find and succinctly capture the thoughts of history’s great philosophers? Pretty hefty assignment, right? Enter GPT-3:
2. Coaching and consulting jobs
… of every kind. If people have put “how to” info about the task online, it’s likely GPT-3 has the aggregate of all such advice within its parameters — within its base of global knowledge.
Here is a game developer, Joshua Thomas, having a discussion with GPT-3 to help him figure out how to progress with his current game project. Joshua is the “Researcher”, “Wise Being” is the name he gave GPT-3. Some highlights:
Researcher: I’m having a difficult time determining what mechanics I’d like in the game. I’m having a bit of a writer’s block coming up with fun game mechanics that I also enjoy. [Goes on to explain in detail his thinking so far]
Wise Being: Interesting idea. I also think this idea could lead to many interesting gameplay mechanics involving time travel.
Researcher: Such as?
Wise Being: Well, what if time travel is an actual mechanic in the game. For example, the player could time travel by getting the grim reaper kid to open up a portal to the past, and then the player could access past versions of levels.
Researcher: I like where this is going.
Wise Being: Another idea could be that the grim reaper kid is able to see the past and future, and he can communicate with dead people. Perhaps the player could communicate with dead characters in the game through a dialogue tree. This would allow for the player to solve puzzles involving the ghosts of dead characters who hold information that is necessary to complete the game.
Here is the full exchange. It’s fascinating.
In other examples, GPT-3 explains how to use it to produce a bestselling book, shows how to use graphic design to make people read your posts on Twitter, generates resumes for you, and can write a presentation for you.
What about investment advice? There are no limits. Via WIRED:
Delian Asparouhov, an investor with Founders Fund, an early backer of Facebook and SpaceX cofounded by Peter Thiel, blogged that GPT-3 “provides 10,000 PhDs that are willing to converse with you.” Asparouhov fed GPT-3 the start of a memo on a prospective health care investment. The system added discussion of regulatory hurdles and wrote, “I would be comfortable with that risk, because of the massive upside and massive costs [sic] savings to the system.”
It’s worth acknowledging that the best consultants are experts and are sometimes using information that isn’t public yet. They will not be as affected by GPT-3. Those that don’t have meaningful proprietary information, however, won’t have this defence. It’s also worth remembering that GPT-3 can be provided the information it needs, in addition to the set it was trained on, meaning it can theoretically be adapted to step into any situation.
3. Counselling and mediation jobs
In some examples, users ask GPT-3 about God and the meaning of life.
Nick Cammarata, a safety researcher at OpenAI, confided in it to a degree he didn’t feel comfortable sharing publicly. Instead he shared this sample (“John” is GPT-3 in this exchange).
He also shows how he has been using it to help with gratitude journaling.
In another case, a user leveraged GPT-3 to better understand another person’s perspective and thinking.
4. Law / Legal jobs
6. Healthcare jobs
In this example, GPT-3 is asked an intentionally difficult medical question. It successfully chooses and explains the correct option.
7. All writing jobs (journalism, creative writing, etc.)
This is GPT-3’s bread and butter.
Let’s start with the fact that it can help you write in the first place: to write better, or to get through writer’s block.
Stuck on ideas? You can also have GPT-3 kickstart things for you.
But that’s just the basics. GPT-3 can really write, and do it well.
Mario Klingemann (whose Twitter feed I highly recommend) also directed GPT-3 to write about Twitter, in the style of Jerome K. Jerome, the 19th century English writer and humorist, best known for his comedic travelogues. A sample:
If you’re familiar with Jerome K. Jerome’s writing, the above should have made you laugh knowingly. AI made you laugh.
As Guardian journalist Hannah Jane Parkinson wrote about GPT-2 already,
It will quash the essay-writing market, given it could just knock ’em out, without an Oxbridge graduate in a studio flat somewhere charging £500. It could inundate you with emails and make it almost impossible to distinguish the real from the auto-generated. An example of the issues involved: in Friday’s print Guardian we ran an article that GPT2 had written itself (it wrote its own made-up quotes; structured its own paragraphs; added its own “facts”) and at present we have not published that piece online, because we couldn’t figure out a way that would nullify the risk of it being taken as real if viewed out of context.
Seeing GPT2 “write” one of “my” articles was a stomach-dropping moment: a) it turns out I am not the unique genius we all assumed me to be; an actual machine can replicate my tone to a T; b) does anyone have any job openings?
Here is an entire site dedicated to GPT-3-generated creative fiction, which has already amassed over 5,000 newsletter subscribers.
If you’re still not convinced, I highly recommend you read this blog post, to the very end, or take a browse through this: a fake blog produced entirely by GPT-3. Oh, and guess what? Its content performed really well:
8. Web Design jobs
Jordan Singer built a Figma plugin that designs for you. He calls it “Designer”. You just need to describe the design you want.
9. Programming/Coding jobs
Describe the app you want, and GPT-3 can generate the code.
Here’s another, by Sharif Shameem, where you describe the layout and it generates the code for you. And here’s another where GPT-3 helps you interpret and improve code.
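For a sense of how demos like these are typically wired up, here is a minimal sketch. The prompt format and the send_to_model() stub below are my own illustrative assumptions, not the actual code behind any of the projects above; the common pattern is simply to show the model one or two description-to-code examples and let it continue the pattern.

```python
# A few-shot prompt: one worked example, then the user's request.
# The model is asked to "complete" the text, which yields the code.
FEW_SHOT = """description: a button that says hello
code: <button onclick="alert('hello')">hello</button>

description: {description}
code:"""

def build_prompt(description):
    """Assemble the few-shot prompt for a code-generation request."""
    return FEW_SHOT.format(description=description)

def send_to_model(prompt):
    # In the real demos this would be a request to the GPT-3 API;
    # stubbed out here since it requires beta access and an API key.
    raise NotImplementedError("requires API access")

prompt = build_prompt("a red button that says stop")
print(prompt.splitlines()[-2])  # the description line awaiting completion
```

No traditional parsing or code templates are involved: the “program” is just the prompt, which is why these demos could be built in hours.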
10. Graphic design / visual editing jobs
Coming back to the point of mindlessness, for a second:
what differentiates this un-thinking, automated work, from the labour of somebody with great creativity, imagination, and artistic competencies?
11. Marketing & Social Media Marketing Jobs
Given GPT-3’s competence at writing and imagery, marketing is an easy target. While more complex tasks are already threatened, so are the smaller, time-consuming ones: Sushant Kumar built a generator using GPT-3 that pumps out usable, tweetable quotes given a single word as a prompt:
12. Research and data entry
Now imagine how this scales as it integrates with other tools. Here is an Excel function which can infer relationships between cells:
Goodbye many, many data-entry and analysis jobs.
12 questions we should be more earnestly asking, debating, and preparing to solve.
The fact that GPT-3 still has its flaws is no surprise. However, it would be shortsighted to use this as a reason to dismiss it. Given all of the examples and context above, it is more than likely that GPT-3, or AI like it, will transform our economy within the next few years. We should be using this time to craft policy and standards that prepare us, rather than scrambling when the wave is already here.
In some ways, it is no different from pandemic preparedness.
1. Why is OpenAI suddenly okay with providing access to this?
First and foremost: one cannot help but think it warrants closer scrutiny that OpenAI deemed GPT-2 too dangerous, yet only months after adding a for-profit arm to its operations was suddenly okay with making available something significantly more capable and dangerous. GPT-3 was released less than two weeks before the introduction of the OpenAI API to commercialize its AI.
2. Should all AI output be labeled, by law, as AI-created or AI-augmented?
How can we be sure what is real, otherwise? In one of the articles I suggested above, one discovers that the entire thing was written by GPT-3. What if I told you everything written until this section was also written by GPT-3? What about comments sections?
Just like people are underestimating how a “mindless” AI can still turn the economy upside down, it’s also easy to underestimate the dark potential of seemingly positive aspects of GPT-3. For example, it can help you seem more likeable. Great. Today, it’s pretty easy to spot a troll or bot that is overtly controversial. What if it’s trained to be as subtly suggestive as possible, gradually fooling far more people? More benignly, if you use it to write better emails, or to endear yourself to others, should they be warned?
In this regard, OpenAI, and whomever has the next breakthrough, will have the world at their mercy: they are the only ones who have the visibility to tell us whether something was a byproduct of their system. (See point 12 for the dystopian implications of this.)
3. Will this be the straw that breaks the camel’s back with regards to social media and regulation of the digital commons?
The internet is the nervous system of our modern society. We’ve already seen it tear our social fabric apart with Fake News and social media manipulation. What happens if that scale can be increased 1000-fold and made more difficult to detect? If people are already so quick to call the actual news “fake news”, will anything be believable anymore? At its most extreme point, how can society function when nobody knows what to trust?
As Renee DiResta wrote in WIRED,
This wouldn’t be the first such media inflection point where our sense of what’s real shifted all at once. When Photoshop, After Effects, and other image-editing and CGI tools began to emerge three decades ago, the transformative potential of these tools for artistic endeavors — as well as their impact on our perception of the world — was immediately recognized.
Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. […] We will have to adjust, and adapt, to a new level of unreality.
But synthetic text — particularly of the kind that’s now being produced — presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative — or even real. In psychology, this is called the majority illusion.
What happens when the people behind fake news become impossibly difficult to spot as fake? Like shell corporations for reality instead of for money laundering.
4. Will people stop sharing their information and insight online?
The fact of the matter is that GPT-3 derives its power from humanity’s collective knowledge. Its parameters are based on Common Crawl — a broad scrape of the 60 million domains on the internet along with a large subset of the sites to which they link — as well as Wikipedia, and historical books. If these same people begin losing their jobs as a result of insight they contributed online, will they still want to? Will there be a revolt when this becomes common knowledge? Will this open a new wave of IP protection issues? Will people be able to legally demand their insights aren’t included in an AI’s dataset? Could people choose to collectively cripple GPT-3’s dataset? Should they?
5. What happens when coders — especially entry to mid-level — are the new coal miners? Are we ready for the economic reality of that?
As an increasing array of jobs either disappeared or became precarious, we rushed to teach our kids computer science, and coding bootcamps appeared everywhere as everyone felt they had to learn to code to remain employable. What happens when all that work, and those avenues, are made largely redundant too? How will social welfare systems keep up? Is it time to start taking UBI (Universal Basic Income) much more seriously?
Any good programmer will tell you that with the rise of “no-code” solutions, those coming out of coding bootcamps are walking on increasingly thin ice. GPT-3 arguably melts what’s left.
Furthermore, what happens to the cascading impact of technology when there is effectively infinite coding capacity?
6. People always talk about unemployment, but underemployment is the more telling indicator. What happens when all the underemployment shifts to unemployment?
Unemployment shouldn’t have been what we were ever looking for as a deeper warning sign. It’s underemployment, i.e. jobs that are well below the intellectual demands, monetary rewards, and lifestyle stability that the person’s training and capabilities should have generated.
The last 10 years saw a dramatic decrease in stable work. Until now, the digital revolution, and simple automation, did not remove enough labour, so our economies existed in an uncomfortable, unsustainable middle-ground where there was still a lot of human-powered work to be done, but not in a way that satisfies the labour expectations of the population. In 2017, Ryan Avent, a senior editor at The Economist, wrote:
The most recent jobs reports in America and Britain tell the tale. Employment is growing, month after month after month. But wage growth is abysmal. So is productivity growth: not surprising in economies where there are lots of people on the job working for low pay.
This is a critical point. People ask: if robots are stealing all the jobs then why is employment at record highs? But imagine what would happen if someone unveiled a robot tomorrow which could do the work of 30% of the workforce. Employment wouldn’t fall 30%, because while some of the displaced workers might give up on work and drop out of the labour force, most couldn’t: they need the money. They would seek out other work, glutting HR offices and employment centres and placing downward pressure on the wage companies need to offer to fill a job: until wages fall to such a low level that people do give up on work entirely, drop out of the labour force, and live on whatever family resources they have available…
We now, very realistically, have that “robot”. It may not look like what we imagined, or be as sophisticated as we thought, but the first version of it is undeniably here. Even if it needs a few more years of polish, that’s only a few more years we have to prepare.
If anything, the pandemic has accelerated the demand: it has created an acceptable excuse to eliminate human interaction from as many service experiences as possible.
The central aspect people tend to overlook when arguing that automation just creates other jobs to replace the ones it eliminates is that previous automation could not create other automations, nor be so broadly and flexibly applicable. We also forget that the vast majority of automation was mechanical, and then rote information automation. None of it was credibly intelligent. Once intelligence happens, it overlaps with people and the value they add by thinking. Thinking, or at least what seems like thinking, as GPT-3 has shown us, is no longer job protection. The versatility of new automation negates the supposed creation of sufficient new jobs.
So, on the subject of widespread job elimination…
7. Should companies be forced to pay all or some portion of the money an AI generates, to the person the AI either directly or indirectly replaced?
As has been pointed out, AI will be based on knowledge we provided.
The inverse of people not sharing their insights online anymore: should people be compensated by the amount of valuable information they provide, and taxed for information pollution — so much of what we find on the web today — that hurts AI’s ability to provide optimal answers?
8. What will become of our social life, when AI becomes a reliable companion, or replacement? How will we ensure the isolation and echo chambers of social media aren’t made even worse?
This is especially relevant during a pandemic, where isolation is the norm.
Replika is a company that has flown somewhat under the radar, given how broadly it’s used and how much it seems like something out of a Black Mirror episode. As Mario Gabriele pointed out:
Created in 2015, Replika provides a sympathetic texting partner, designed to serve as a digital therapist. But for many of the company’s 500K monthly active users, Replika is too charming to resist.
There are signs we may already prefer their company: research on Microsoft’s XiaoIce indicated that conversations with the chatbot last longer than human-to-human interactions.
9. What happens to how we learn, and how we are expected to retain or apply knowledge? What will be an acceptable standard?
We have already seen a dramatic shift in pedagogy in schools and universities over the past 20 years, where the internet has made it unnecessary to commit so much to memory, or understand systems as thoroughly. One can simply look it up, or it’s assumed a common software can deliver the solution.
What happens when answers of every kind are available to anyone so easily and so quickly? How will schools monitor cheating or fight AI-powered homework helpers? Should schools lean into this new help?
People often comment on how full of knowledge Gen Z is because of how quickly they can find anything, much as millennials were compared to boomers. The generation that grows up with GPT-3, and what comes after it, will create a gap in learning and knowledge like we’ve never experienced before. The internet democratized access to information. AI will democratize the synthesis of information and answers.
10. What will jobs look like? Will there be such a thing anymore? What will happen to our sense of purpose?
If we take a closer look, it’s not so much “jobs” that will be made redundant, but many types of tasks. Some industries — especially those dependent on both highly specialized knowledge and physical tasks, like surgeons, for example — will remain defensible for longer. On the other hand, those who derive their value almost entirely from completely digital, information-related tasks, will be made especially vulnerable. Today’s white collar is truly becoming yesterday’s blue collar.
That’s all to say, if all the information-related, digital tasks are made redundant, what will be left? It certainly won’t be enough to justify anything even close to a 5 day work week for most people, nor enough to justify one employer. What does an economy look like where the majority are gig workers? Is the Passion Economy the answer, and should more be invested in securing the infrastructure for it? Again, is UBI a necessary component of this, at scale?
Furthermore, how will people remain motivated, constructive, and healthy with so much time on their hands, and so little urgency around their work? The negative habits often associated with unemployment are a legitimate concern. This is an ongoing debate, often associated with UBI, and one that needs serious policy consideration as well.
11. Should we allow ourselves to be governed by an objective, rational AI? Especially as confidence in politicians around the world craters?
In a way, you could argue AI is the ultimate democracy, because its recommendations are based on collective human knowledge, and can be tuned to amplify the most vetted facts. In some ways, GPT-3’s version of unthinking, mindless intelligence is actually an asset. It cannot be swayed emotionally like people can. Would you trust it more than today’s human leaders?
12. Max Tegmark wrote a short story, previously considered science fiction, that we now know could be hypothetically possible in the next 5 years. How do we prepare, from a policy perspective?
This should be mandatory reading for anyone who wants to appreciate how global domination and the collapse of governments can be achieved through AI.
AI this powerful can be an opportunity, or a catastrophe. How seriously we prepare will decide whether it’s the former or the latter.
We can see all of the above as a terrifying dystopian reality, but this is almost entirely based on our fear of change and our assumptions around what is normal and what is possible.
Instead, we can choose to see the opportunity, and ensure we are in a position to enjoy its benefits: with the right safety net and infrastructure in place, spare time can become a blessing rather than a curse. More “explore or relax” time than “I desperately need this money” time. What does it look like to be in a world where you only need to work 2 or 3 days a week, at most, without starving? Or, as a result, one where you can pursue your passions, without worrying whether you can make a living off of them?
This all hinges on whether the coming economic shockwave will be used to consolidate power and make the rich richer, truly leaving the rest with nothing; or whether the new wealth will be distributed evenly and fairly to create a new level of prosperity and satisfaction the likes of which we have never known before.
And this all depends on asking questions early, demanding they be taken seriously by corporate and government leaders, and putting the right policies in place now instead of waiting until the change is fully upon us. We really don’t have much time to work with.
Thanks for reading! If you found this useful or thought-provoking, please give it some applause and/or pass it along so these questions can begin to be discussed more broadly. Lastly, if you’re an AI beginner, and wish you had a better understanding of the basics, here is a primer I wrote just for folks like you.
My name is Mario. I’m Co-Founder of Readocracy.com. We give you credit for all the reading you do online, let you learn from it (Fitbit for your information diet), and let you present the best of it in a beautiful, verified profile and stream others can follow along with. In a world heading toward automation, it allows you to showcase your singular set of knowledge and passions. Here is mine. Why? We’re on a mission to turn the internet into a meritocracy (free speech ≠ free reach), for the good of society, by making attention to information count as social capital.