<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[OUTPOST]]></title><description><![CDATA[Communities controlled by users, not big tech]]></description><link>https://www.outpost-protocol.com/</link><image><url>https://www.outpost-protocol.com/favicon.png</url><title>OUTPOST</title><link>https://www.outpost-protocol.com/</link></image><generator>Ghost 3.1</generator><lastBuildDate>Tue, 05 May 2026 16:18:30 GMT</lastBuildDate><atom:link href="https://www.outpost-protocol.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[The Next Generation of Media]]></title><description><![CDATA[The next generation of media is going to allow anyone to easily own their content distribution.]]></description><link>https://www.outpost-protocol.com/blog/the-next-generation-of-media/</link><guid isPermaLink="false">5ef8c00882d041001ec067f9</guid><dc:creator><![CDATA[Sam Hatem]]></dc:creator><pubDate>Sun, 28 Jun 2020 16:09:39 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Media is changing fast. Before the internet, communication channels were really expensive. You used to need a newspaper or magazine to be able to publish your writing. Now you just need a Medium account. You used to need a production company to distribute videos. Now you just need a YouTube channel.</p>
<p>The web allows anyone with an internet connection to cheaply distribute all forms of content. It took power away from centralized companies and gave it to the talent themselves. Now <a href="https://www.barstoolsports.com/">Barstool Sports</a>, <a href="https://www.youtube.com/channel/UCmh5gdwCx6lN7gEC20leNVA">David Dobrik's Vlog Squad</a>, <a href="https://www.youtube.com/channel/UCkhxWF5CTMUgxneqAFP96LQ">Nelk</a>, and <a href="https://twitter.com/blakeir/status/1257826474347900928">dozens</a> of other companies are changing media from the bottom up.</p>
<p>These new companies are all about the talent themselves. By removing the traditional barriers to entry, creators have been able to directly organize together instead of being forced to join a traditional organization like the New York Times or CBS. In doing so, they can create the content they want to create, not what some big company wants.</p>
<p>Unlike traditional media, though, new media companies often don't own their distribution. They rely on YouTube, Medium, Spotify, and other big tech companies for it. This is fine if you are Joe Rogan and get offered a <a href="https://www.wsj.com/articles/spotify-strikes-exclusive-podcast-deal-with-joe-rogan-11589913814">$100 million contract</a> from Spotify, but other creators, even those who consistently get millions of views on their content, haven't fared as well. For one reason or another, they get <a href="https://twitter.com/NelkFilmz/status/1249022597607669762">censored</a> and <a href="https://www.theverge.com/2019/4/5/18287318/youtube-logan-paul-pewdiepie-demonetization-adpocalypse-premium-influencers-creators">demonetized</a>.</p>
<p>We think that in the next generation of media, creators will have more power. They will be able to own their distribution. They will get to decide who they publish content with. At the click of a button, they'll be able to create a new publishing company and add others to it. They'll control the quality of content on their platform, giving them more credibility than being lumped together with all other Medium writers or YouTube creators. In other words, they'll be able to create organizations that can develop the prestige of traditional media companies. They'll start to capture the majority of the value they provide, too.</p>
<p><em>Another great piece on next-generation media that influenced our thinking:<br>
<a href="https://medium.com/@jarroddicker/the-next-media-business-talent-reputation-and-lessons-from-record-labels-e14b695c43b">The Next Media Opportunity: Talent, Reputation and Lessons from Record Labels</a>.</em></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[On Content Moderation]]></title><description><![CDATA[There are better solutions to content moderation.]]></description><link>https://www.outpost-protocol.com/blog/on-content-moderation/</link><guid isPermaLink="false">5eca956a859afa001ec34584</guid><dc:creator><![CDATA[Sam Hatem]]></dc:creator><pubDate>Thu, 28 May 2020 08:30:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Recently YouTube caused outrage when CEO Susan Wojcicki <a href="https://www.cnn.com/videos/business/2020/04/19/inside-youtubes-numerous-policy-changes-during-the-pandemic.cnn">said</a> it is against YouTube policy to circulate views contradictory to traditional authorities like the WHO. As Austen Allred <a href="https://twitter.com/Austen/status/1245689629178650624">points out</a>, though, traditional authorities have spread a lot of misinformation regarding the disease, including that there is no evidence of human-to-human transmission, no evidence of airborne transmission, and that masks are ineffective. At the very least, traditional authorities have shown that they are fallible like everyone else. Unfortunately, YouTube has taken the easy content moderation route, blindly agreeing with these officials.</p>
<p>YouTube's content moderation policy has made traditional authorities a central source of truth. There are two fundamental issues with only seeing a single perspective on any issue:</p>
<ol>
<li>There is no perfect perspective.</li>
<li>Information can conflict with the self-interest of people and institutions.</li>
</ol>
<p>No person or institution can be exposed to all relevant information on a subject. All perspectives have inconsistencies and lack some information. Additionally, it is human nature to ignore information that is not in our self-interest. It's a huge cognitive blindspot that everyone has. If only one perspective is allowed, that perspective essentially has a monopoly on information in their field, allowing them to use their preexisting influence to further impose their power.</p>
<p>Some may argue that the solution is to remove all forms of content moderation, but we're wary of that argument. Consider an academic journal that just let everyone publish in it. Some research would be great but most would not. Everyone would need to spend a lot more time fact-checking every piece of information, and those who blindly accept information at face value will end up believing in a lot of misinformation.<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> It's not very practical. Nobody can be expected to verify every piece of information they come across. It would be extremely inefficient.</p>
<p>Content moderation is great when it helps us filter information that we may not have the time to verify for ourselves. But these filters can become a problem if they have too much power. If Alice, a prestigious polymath, has a monopoly on academic journals, her journals would have much better research than one without any filters, but it would only contain her perspective. If Alice doesn't like a research paper, it won't get published. When she is wrong, she may prevent important information from circulating. Alice is a single point of failure.</p>
<p>YouTube's fundamental mistake is not content moderation, it is making traditional authorities a central source of truth. In doing so, these organizations also became a single point of failure.</p>
<p>Many of our information sources today are on one of these two extremes; information often either comes from inside traditional institutions, or from sites that are open for anyone to publish. We think a better alternative is to allow for many quality-controlled perspectives, a middle ground between these two extremes. All perspectives contain biases so everyone should ideally expose themselves to many instead of being forced to only see one. Content moderation should be decentralized. There should be many independent decisions as to what is good content and they should come from many different perspectives.</p>
<p>The internet has allowed anyone to publish content, but it hasn't allowed people to easily work together to do so. There is no online mechanism that can easily allow people to organize in new ways. Publishing platforms are often either open to all like Twitter, YouTube, and Medium, or a custom solution with access control like Hackernoon or the New York Times' website. By allowing people to organize together, they can exchange feedback and benefit from each other's reach.<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup></p>
<p>People should be able to create new publishing communities with access control at the click of a button. Communities will be entirely <a href="https://a16z.com/2019/03/02/cooperatives-cryptonetworks/">community owned and operated</a>. Within one, you may have editors and administrators who control what content is published or just who is allowed to publish. Each community will have some sort of guidelines specifying what content is allowed. Creators who violate the guidelines will put themselves at risk of being kicked out. If administrators violate the guidelines, either they'll get voted out, or everyone will eventually switch to a fairer community.</p>
<p>If there is no existing community to publish someone's content, or someone believes they can create a better community than the existing ones, they should be able to easily create one. It's not realistic for someone to set up a traditional research journal, but it is realistic for someone to click a few buttons to create a new publishing community.</p>
<p>Easily allowing people to create communities is similar to Reddit, but our communities will be owned by users, not by some big tech company. We also want to better support longer forms of content and videos. By lowering the cost to create, enter, and exit these publishing communities, the best creators, researchers, and journalists will likely gravitate towards the communities that create the best content and reward their contributors fairly.</p>
<p>A way we like to think about the multiple-perspective approach to content moderation is to compare information sources to markets. Right now, many insiders have the equivalent of a monopoly within their field. Insiders can decide what information is valuable, and may sometimes use this power in their self-interest. Similarly, monopolies control what features their goods have, and they may use their influence to rent-seek. Monopolies are fine when they are productive, but they inhibit innovation if and when they begin to abuse their power. When incumbents cannot abuse their power, people naturally gravitate towards the best products at the best prices, and companies that fail to adapt to new technologies or new circumstances are eventually left behind. Similarly, we think a protocol can be designed to make it harder for insiders to abuse their influence, allowing people to naturally gravitate towards the best quality information over time. Information sources that fail to adapt to new research should get left behind, and the best creators, researchers, and journalists will eventually move to the best communities.</p>
<p>Such a system may be a solution to the single point of failure that is typical in content moderation. With many more organized perspectives, sites like YouTube could incorporate multiple perspectives into their content moderation policies. Or better yet, such a protocol could help people move off big tech platforms altogether.</p>
<p>Ultimately, these are just our predictions as to how content can be better moderated online. We admit that just because this may make sense in theory does not mean it will work in practice. It's time to build.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>It's also <a href="https://en.wikipedia.org/wiki/M%C3%BCnchhausen_trilemma">impossible</a> to verify everything according to the skeptical school of philosophy. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p><a href="https://cdixon.org/2012/07/08/how-bundling-benefits-sellers-and-buyers">Bundling</a> creators together can also be profitable! <a href="#fnref2" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Manifesto]]></title><description><![CDATA[We think people should control the online communities that they are part of, not big tech.]]></description><link>https://www.outpost-protocol.com/blog/clear-rain-101/</link><guid isPermaLink="false">5ebc585a27ecde001ea61db1</guid><dc:creator><![CDATA[Sam Hatem]]></dc:creator><pubDate>Wed, 13 May 2020 21:24:26 GMT</pubDate><content:encoded><![CDATA[<p>There's a huge issue seemingly everywhere online: no accountability. People can post fake news, fake reviews, clickbait, and annoying content with no consequences. Google, Facebook, and other big tech companies want to please everyone, so they make us put up with these irritating identities. Big tech changes our news feeds and recommendation algorithms, and nobody at those companies ever accepts blame for making our user experience worse. They haven't been accountable for data breaches either. They always just say it was outside of their control and that it won't happen again. But data breaches keep happening. Nobody is accountable. Even worse, they keep the revenue that we generate for them.</p><p>When we thought about it, the design of social media seemed odd. We don't naturally organize in one-size-fits-all systems supervised by corporations. Offline we are part of many different communities that are each naturally self-governed, and those work great. Social media from big tech, on the other hand, has removed all accountability. It makes sense, then, that everyone seems to behave so poorly.</p><!--kg-card-begin: markdown--><p><img src="https://clear-rain-img.s3.us-east-2.amazonaws.com/online_vs_offline.png" alt="How people organize online and offline"></p>
<!--kg-card-end: markdown--><p>The internet should be an amazing place to create content and connect with others. We want to give it back to the people. Online communities should mirror how people organize offline. People should be able to own the communities that they join. Content creators should be rewarded more for creating content. Everyone should control their data. There should be no confusing terms of service and content should be moderated by community members.</p><p>We want to create a future for the internet that excites us. The internet should be controlled by users, not big tech.</p>]]></content:encoded></item><item><title><![CDATA[The Limitations of Aligned Incentives]]></title><description><![CDATA[For many, aligned incentives are the be-all and end-all of network design, yet we show where aligned incentives are impossible and where explicit incentives are even counterproductive.]]></description><link>https://www.outpost-protocol.com/blog/the-limitations-of-aligned-incentives/</link><guid isPermaLink="false">5e8e7f6e019a36001e0bf2a4</guid><dc:creator><![CDATA[Sam Hatem]]></dc:creator><pubDate>Fri, 24 Apr 2020 01:53:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>The hype around cryptocurrencies like Bitcoin and Ethereum has led to a surge of projects trying to put seemingly everything on a blockchain including social media. Projects like <a href="https://steem.com/">Steem</a>, <a href="https://www.propsproject.com/">Props</a> and <a href="https://www.kin.org/">Kin</a> all use blockchain to reward the users and developers of social systems. They market themselves as realigning incentives between developers, creators, and viewers of platforms so in theory, these sites should reward quality content rather than the attention-grabbing content that is prioritized on platforms like Facebook and YouTube. And because these are protocols that nobody owns, there is no worry that some centralized company will misuse user data. 
The often-cited article <a href="https://knightcolumbia.org/content/protocols-not-platforms-a-technological-approach-to-free-speech">Protocols Not Platforms</a> (which Jack Dorsey, for example, <a href="https://twitter.com/jack/status/1204766087281172480">cited</a> as an inspiration for Twitter's Bluesky initiative) argues that social cryptocurrencies like those mentioned above may be able both to create a sustainable business model and to align incentives between participants on the network.</p>
<p>For many, it seems aligned incentives are the be-all and end-all of network design—including us in our original version of <a href="https://www.outpost-protocol.com/blog/realigning-online-incentives/">Realigning Online Incentives</a>—yet as we argue here, it is not that simple. First, these subjective reward protocols are inherently flawed due to the impossibility of subjective consensus. Without consensus, social reward mechanisms are gameable. The most problematic way to exploit these rewards may be collusion. We show that identities are often incentivized to collude and that in many cases collusion is impossible to detect and therefore impossible to stop. Finally, we question whether economically aligned incentives would even lead to a healthy online social ecosystem where users create better quality content.</p>
<p>Although we are critical of projects in our space, we wish to do so with good intentions. Because these projects are clouded in esoteric terms and technology, we worry developers, users, and investors have not noticed the flaws that we see. And while we are critical of some projects mentioned here, we would like to note that there is also a lot these projects did well to become reputable. There is a lot that we have personally learned from them and ultimately, we hope that we too will grow large enough to be criticized in blog posts discussing social media. Lastly, we admit there is much more we don't know than what we do. The projects we criticize could understand things that we do not.</p>
<h2 id="theimpossibilityofsubjectiveconsensus">The Impossibility of Subjective Consensus</h2>
<p>Let’s first take a look at how the aforementioned subjective reward protocols attempt to reward quality content and apps on their networks. Generally, these projects distribute rewards at set intervals to the creators, developers, or both based on some metric. In the case of <a href="https://steemit.com/">Steemit</a>, the blogging platform on the Steem blockchain, Steem tokens are minted and rewarded to content creators proportional to the votes on their posts during that interval. The Steem <a href="https://steem.com/SteemWhitePaper.pdf">whitepaper</a> calls this process ‘subjective proof of work’ (pg. 12) referencing Bitcoin’s Proof of Work consensus mechanism. But unlike Bitcoin’s objective consensus mechanism, Steem’s fundamental flaw is that there is no subjective consensus.</p>
<p>Consensus protocols are meant to allow distributed identities to agree on some sort of truth. There are limitations of what different identities can agree upon, though. Consider three people in the physical world sitting around a table with three pencils on it. Everyone agrees on a standard definition for a pencil and that there are 3 objects on the table that satisfy that definition. There is consensus because the number of pencils is objective. Yet now Alice, Bob, and Charlie want to figure out which one is the best pencil. Alice says it’s the first because it is the biggest so it will last the longest. Bob says it is the second because it has the best shade of yellow. Charlie, who considers himself a contrarian, likes the third. They argue for hours and cannot agree on the truth. There is no consensus.</p>
<p>Bitcoin’s consensus works because it is purely objective. The inputs of a transaction either have the same sum as the outputs or they do not. A Bitcoin block either only contains valid transactions or it does not. And a block is either on the longest chain or it is not. As a result, there are clear definitions of honest and malicious validators in the Bitcoin network. Just as importantly, Bitcoin’s miners are doing objective work for the network. Miners use their computation power to increase network security. The more honest hash power validating the network, the harder it is to attack. Miners provide objective value to the network and they are rewarded proportionally to the value they provide.</p>
<p>The same is not true for subjective consensus. Everyone will have different definitions of what constitutes good and bad content, and therefore it is impossible to differentiate between honest and malicious actions. As a result, there is also no consensus regarding what content and actions may increase the quality of social networks and which do not. Without consensus regarding what is honest behavior, there is no way to detect or stop people from creating different accounts to vote on their own posts or from paying bots to vote on them. The latter is <a href="https://medium.com/@heymattsokol/why-i-left-the-steem-blockchain-bb0214a451b8">already happening</a> on Steem and we would be shocked if the former was not, too. The Steem whitepaper even acknowledges and tries to justify this flaw, saying, “Eliminating ‘abuse’ is not possible and shouldn’t be the goal. Even those who are attempting to ‘abuse’ the system are still doing work” (pg. 15). They argue abusers are still doing work, yet there is no way to quantify the value of the 'work.' As a result, Steem rewards are gameable.</p>
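<p>As a toy illustration of why vote-proportional rewards are gameable, consider a minimal sketch of a Steem-style payout in which a reward pool is split among posts in proportion to the stake voting for them. The names and numbers here are hypothetical, and this is not Steem's actual payout code, which also involves curation rewards and more complex weighting.</p>

```python
# Minimal sketch of a Steem-style reward pool, split among posts in
# proportion to the stake-weighted votes each post receives.
# Hypothetical names and numbers; real payout rules are more complex.

def distribute(reward_pool, votes):
    """votes maps each post to the total stake voting for it."""
    total_stake = sum(votes.values())
    return {post: reward_pool * stake / total_stake
            for post, stake in votes.items()}

# Honest round: only organic votes.
honest = distribute(100.0, {"alice_post": 60.0, "bob_post": 40.0})

# Gamed round: Bob moves 50 tokens of his own stake into sock-puppet
# accounts and votes for his own post. The protocol cannot distinguish
# these votes from organic ones, so Bob's payout grows.
gamed = distribute(100.0, {"alice_post": 60.0, "bob_post": 40.0 + 50.0})
```

With these made-up numbers, Bob's share of the pool rises from 40% to 60% without his content changing at all, which is the sense in which the metric, not the content, is what gets rewarded.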
<p><a href="https://open.spotify.com/episode/1iS3r5EEAEWZvOMCmGokSp">Props</a> and <a href="https://www.kin.org/kre/">Kin</a> have similar issues. They both reward apps proportionally to the number of in-app transactions. Developers can game these mechanisms by setting up bots to send tokens between wallets in their apps. More transactions make their respective tokens more valuable, but they do not signify quality applications.</p>
<p>All metrics regarding subjective consensus are gameable. Imagine if Alice tells a couple of pencil manufacturers that she will heavily reward whichever manufacturer can make the best pencil. As soon as the manufacturers discover that Alice likes big pencils, they will focus on maximizing that metric. They can skimp on other quality measures and be rewarded nonetheless. And there still will not be any consensus that these pencils are better. Simply stated, subjective consensus does not exist, and any attempts to measure subjective value with metrics are gameable.</p>
<h2 id="thecollusionproblem">The Collusion Problem</h2>
<p>Subjective reward protocols work in theory under the assumption that all individuals are acting independently and purely in their self-interest. If everyone acts independently, then everyone will only vote on content that they like. Yet the ability to pay bots to vote on content brings up a larger problem with subjective reward protocols: the problem of collusion. There is no way to stop people from working together in distributed systems. It happens on Bitcoin in the form of mining pools so that miners can get more steady returns, yet mining pools do not undermine Bitcoin consensus. In the case of social rewards protocols, people can work together to get a disproportionate share of the rewards.</p>
<p>Even more common than payments as collusion will be coalitions as collusion. In most cases, coalitions are not considered collusion, but because we show they can undermine subjective reward systems, we refer to them as such. As Kevin Simler and Robin Hanson discuss in <em>The Elephant in the Brain</em>, all humans naturally form coalitions in their self-interest. It already happens on social media. Successful YouTubers often do collaboration videos with each other. Successful podcast hosts often have each other on their podcasts. Consider two successful creators, Dan and Erin. In many cases, Dan would never have worked with Erin if she were not as popular. She could create just as awesome content and their collaboration would be just as great, but Dan may not care about working together if Erin has a smaller audience. Obviously, there are exceptions where established creators find budding talent, but they are a small minority. Even more common than collaborations is liking someone's post because they've given your content likes in the past. We'll be the first to admit that we already do this on social media. It's natural human behavior. They scratch your back so you scratch theirs.</p>
<p>Not only do humans form coalitions in their self-interest, but Simler and Hanson also assert that humans subconsciously justify their selfish intentions as cooperative. Dan says that he loves Erin's content, but maybe he's just justifying collaborating with Erin to expose himself to her huge audience. If Erin had a smaller audience, would Dan still love her content? The same goes for votes. Does Dan vote for Erin's content because he truly likes it, or does he do so just because Erin also votes for his content? It is impossible to tell.</p>
<p>Now let's look at how coalitions could undermine Steem. Whales, those with a lot of Steem tokens, have the most voting power because voting power is proportional to the number of tokens voters hold. Everyone should therefore want whales to vote for their content, and whales will naturally want other whales to vote on their content so they can remain rich. Whales should, therefore, be incentivized to trade votes with other whales. The whales will argue they like the content of other whales. Because we cannot see the inner workings of their brains, it will be impossible to know if they truly do, or if they are just justifying actions in their self-interest. Steem, therefore, seems to be a plutocracy: governance by the rich. To reiterate, people subconsciously justify selfish behavior as cooperative. We believe that technology should be built to work with human nature, not against it.</p>
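<p>The whale vote-trading dynamic can be made concrete with a small sketch. Assuming a toy model of coin-weighted, vote-proportional payouts (the stakes and pool size below are made up for illustration), two whales who simply upvote each other capture nearly all of the rewards, and no individual vote is provably dishonest.</p>

```python
# Toy model of vote trading under coin-weighted voting. Two whales
# reciprocally upvote each other's posts; a small holder votes for
# their own post. All stakes and the pool size are hypothetical.

def distribute(reward_pool, votes):
    """Split the pool among posts in proportion to stake voting for them."""
    total_stake = sum(votes.values())
    return {post: reward_pool * stake / total_stake
            for post, stake in votes.items()}

stakes = {"whale_a": 500.0, "whale_b": 500.0, "small_c": 10.0}

# A vote's weight equals the voter's stake; each whale votes for the other.
votes = {
    "whale_a_post": stakes["whale_b"],
    "whale_b_post": stakes["whale_a"],
    "small_c_post": stakes["small_c"],
}

rewards = distribute(100.0, votes)
```

With these numbers each whale's post takes roughly 49.5% of the pool and the small holder's under 1%, even though every vote looks like an ordinary expression of preference on-chain.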
<p>Vitalik Buterin also wrote a great piece <a href="https://vitalik.ca/general/2019/04/03/collusion.html">on collusion</a> that greatly influenced our thinking on this subject.</p>
<h2 id="subjectivegovernance">Subjective Governance</h2>
<p>The fundamental issues with subjective rewards protocols bring us to the larger problem of subjective governance in decentralized systems: there are no perfect solutions to subjective, distributed decision making because subjective consensus does not exist. The <a href="https://z.cash/">Zcash</a> privacy cryptocurrency recently <a href="https://messari.io/article/decentralizing-zcash">encountered</a> this issue with their development fund. Initially, the protocol was designed so that 20% of all block rewards would further fund the development of the project. The Founders Reward is set to end later this year, but there is still a lot of development work needed. They must upgrade their protocol in a way that will hopefully satisfy everyone so that the blockchain does not fork. At first glance, the simple solution would be a vote on it, but decentralized voting is much easier said than done. One-identity, one-vote systems cannot work in distributed systems because there is no way to differentiate between a vote tied to a real person and a vote generated by a bot.<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> Dishonest voters could create many identities to gain disproportionate control over the network.</p>
<p>More complex voting systems do not solve this issue either. Zcash founding scientist Eran Tromer <a href="https://forum.zcashcommunity.com/t/staked-poll-on-zcash-dev-fund-debate/34846/51">explains</a> there is an inherent trilemma between coin-weighted, permissionless, and whale-resistant voting systems. If a system is coin-weighted and permissionless—like Steem—then whales can have a disproportionate impact on the outcome, swaying the vote in their favor. You could try capping the amount of funds someone can vote with, but then whales can split their funds among multiple wallets and gain the same voting power. You could create a coin-weighted and whale-resistant system by creating a permissioned vote where all identities are known. In that system, though, someone would inevitably be able to choose who can and cannot vote. In other words, such a voting system is centralized. It is also contrary to the ethos of a permissionless, privacy-focused blockchain.</p>
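<p>The cap-splitting point is easy to see with a short sketch. Assuming a hypothetical per-wallet cap on voting funds, a whale recovers their full voting power just by spreading one balance across many wallets:</p>

```python
# Sketch of why per-wallet caps fail to make coin-weighted voting
# whale-resistant in a permissionless system. The cap value and
# balances are hypothetical, for illustration only.

VOTE_CAP = 100.0  # max coins any single wallet may vote with

def capped_voting_power(wallet_balances):
    """Total voting power after applying the per-wallet cap."""
    return sum(min(balance, VOTE_CAP) for balance in wallet_balances)

# One wallet holding 1,000 coins is capped at 100 voting power...
single_wallet = capped_voting_power([1000.0])

# ...but splitting the same 1,000 coins across ten wallets restores
# all 1,000, because nothing ties the wallets to a single identity.
split_wallets = capped_voting_power([100.0] * 10)
```

Defeating this requires knowing which wallets belong to the same person, which is exactly the permissioned identity layer the trilemma says a permissionless system cannot have.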
<p>Subjective consensus does not exist. There is no perfect solution to subjective, distributed decision making. Each approach has flaws that potentially allow some identities to gain disproportionate control over the vote. In Zcash's case, the one-time decision of how to upgrade the protocol is likely deemed necessary by the vast majority of the community, and its outcome is rather inconsequential to most. Most Zcash users probably will not care who further develops the protocol as long as it continues to function. But for protocols that must continuously make decisions as to what content and apps should be rewarded, it is a much larger problem.</p>
<h2 id="whereincentivesarecounterproductive">Where Incentives are Counterproductive</h2>
<p>Many may hear about the problems inherent to these reward protocols and think the answer must be to choose better metrics or to redesign the reward scheme. At least, that's exactly what we initially thought. But even if the perfect metric existed so that it was ungameable and everyone considered the reward scheme fair, would it necessarily work as intended? Would it lead to better quality content online and a healthy online ecosystem?</p>
<p>For many economists, it is a law of nature that rewards directly impact motivation, yet psychologists and behavioral economists believe it is more nuanced. They often differentiate between intrinsic motivation, motivation for the activity in and of itself, and extrinsic motivation, doing an activity for an external reward. In many cases, extrinsic rewards like those in subjective reward protocols may undermine intrinsic motivation for creating content and apps. If that's true, then subjective reward systems may be counterproductive, potentially leading to worse quality content.</p>
<p>Economists Roland Bénabou and Jean Tirole model how extrinsic rewards impact intrinsic motivation in their paper &quot;<a href="https://www.princeton.edu/~rbenabou/papers/RES2003.pdf">Intrinsic and Extrinsic Motivation</a>.&quot; Their findings generally agree with psychological research that promised rewards often undermine intrinsic motivation. Bénabou and Tirole argue that promised rewards signal a task is unattractive. The larger the promised reward, the more unattractive the task must be.</p>
<p>Psychologists John Condry and James Chambers provide a different explanation for why extrinsic motivation can counteract intrinsic motivation in &quot;Intrinsic Motivation and the Process of Learning.&quot;<sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup> The two studied how extrinsic rewards affect how adolescents learn and they discovered that when many people were promised rewards, they chose significantly easier tasks. They explain that rewards may shift people's focus from the activity itself to the reward they will receive after the activity.</p>
<p>Whatever the reason, promised rewards can undermine the intrinsic enjoyment of an activity. As a result, promising rewards to creators likely leads to more content but does not lead to better quality content. Promised rewards may even lead to worse quality content.</p>
<p>We have only focused on promised rewards so far, and Bénabou and Tirole additionally discuss <em>ex-post</em> rewards, rewards given unexpectedly at the full discretion of the giver. Unlike promised rewards which generally decrease intrinsic motivation, <em>ex-post</em> rewards generally increase it. They theorize that <em>ex-post</em> rewards signal that the work one did was valuable.</p>
<p>We consider subjective reward protocols to sit in a gray area between promised and <em>ex-post</em> rewards. If the developers or users creating on these protocols are purely honest, then the rewards are closer to discretionary. In the case of Steem, though, voting costs the voter little, so a vote is not necessarily a signal that the work is valuable. And as previously discussed, the metrics these protocols use are gameable. If creators can pay vote bots on Steem, then those rewards are more or less promised. If developers on Kin and Props can set up bots to send payments between different wallets, maximizing their number of in-app transactions, then those rewards also resemble promised ones. In both cases, we hypothesize that the ability to game the system will decrease intrinsic motivation, leading to worse-quality content and apps.</p>
<p>Promised rewards do not necessarily create a healthy ecosystem for social media. We would, therefore, like to see next-generation social media market itself as more than just social media with rewards. Networks with rewards are not 10x better networks that warrant users leaving existing platforms, and in many cases, we think they may be worse than traditional platforms that offer little to no reward for content creators.</p>
<p>We personally advocate for paywalls and advertising. The revenue in both cases is entirely discretionary. We hope to create more opportunities for monetization so that more creators can realistically support themselves doing what they love.</p>
<h2 id="beyondalignedincentives">Beyond Aligned Incentives</h2>
<p>There is much more to designing next-generation social media than aligning incentives. Social media platforms are not significantly better just because they reward creators and developers, and in many cases, we believe they may be worse. Because it is often impossible to reach consensus on subjective matters, subjective reward protocols must rely on gameable metrics. These protocols also ignore our natural inclination to form coalitions, which undermines them.</p>
<p>We personally advocate that networks and governance systems should work with human nature, not against it. We believe that next-generation social media should not be drop-in replacements for sites like Facebook, YouTube, and Medium. Instead, they should provide opportunities that those sites cannot.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>See '<a href="https://www.microsoft.com/en-us/research/publication/the-sybil-attack/">The Sybil Attack</a>' for the proof behind this claim. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Their paper is in <a href="https://www.taylorfrancis.com/books/e/9781315666983"><em>The Hidden Costs of Reward: New Perspectives on the Psychology of Human Motivation</em></a>. <a href="#fnref2" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Realigning Online Incentives]]></title><description><![CDATA[An overview of the misaligned incentives inherent to the web right now and some of the projects trying to better align them.]]></description><link>https://www.outpost-protocol.com/blog/realigning-online-incentives/</link><guid isPermaLink="false">5e8d23eaf9111c001edf174a</guid><dc:creator><![CDATA[Sam Hatem]]></dc:creator><pubDate>Wed, 08 Apr 2020 01:13:05 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p><em>Originally published on <a href="https://hackernoon.com/realigning-online-incentives-jjp134hn">Hacker Noon</a>. The original had room for improvement<sup class="footnote-ref"><a href="#fn1" id="fnref1">[1]</a></sup> so I added a bunch of evidence and revised my points. The initial version was published with help from <a href="https://twitter.com/_kabessa">Nir Kabessa</a> and Professor Dan Rubenstein.</em></p>
<p>When Larry Page and Sergey Brin received funding for Google in 1998, they had to figure out how their academic project could make money. Initially, the two were against advertising because it would incentivize displaying paid advertisements over quality search results. In a 1998 <a href="https://www.sciencedirect.com/science/article/abs/pii/S1389128612003611">paper</a> they even argue, “we believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” But later that year, Page and Brin decided to experiment with small ads on their platform.</p>
<p>Fast forward to 2019 and Google <a href="https://www.vox.com/2019/2/20/18232433/digital-advertising-facebook-google-growth-tv-print-emarketer-2019">has</a> over a third of the digital advertising market share, followed by Facebook with over 20%. But there is a fundamental problem with the advertisement business model: the mixed incentives that Page and Brin identified when starting Google. Users want useful services and quality content, but advertising platforms want users to view ads. This issue is prevalent seemingly everywhere online, but now many projects are trying to build solutions to help realign incentives around user data and content creation.</p>
<h2 id="attentionanddataextraction">Attention and Data Extraction</h2>
<p>Platforms like Facebook, YouTube, and Instagram want to maximize ad revenue, so they try to maximize user attention and user data: more attention so their ad space is worth more, and more data to improve the machine learning algorithms that generate more attention. But while more attention does lead to more ad revenue, it does not mean that users like the content shown in their algorithmic news feeds and recommendation sidebars. These machine learning algorithms have revealed that humans are naturally distracted by sensationalist and provocative content. Mark Zuckerberg <a href="https://www.facebook.com/notes/mark-zuckerberg/a-blueprint-for-content-governance-and-enforcement/10156443129621634/">acknowledged</a> this problem on Facebook (after activists like <a href="https://www.ted.com/talks/tristan_harris_how_a_handful_of_tech_companies_control_billions_of_minds_every_day/transcript?language=en">Tristan Harris</a> started raising awareness about it); referring to this content, Zuckerberg says, “people will engage with it more on average — even when they tell us afterwards they don’t like the content.” Zuck recognizes that people do not necessarily like content that grabs their attention. The fundamental issue with these algorithms is that they rate content based on metrics, most notably time spent viewing it. They are great at maximizing those metrics, yet they do so at a cost to the user experience.</p>
<p>Additionally, the data these sites hoard makes them prime targets for hacks. All Facebook data is hosted on Facebook’s servers, which made it easy for, say, <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election">Cambridge Analytica</a> to access millions of users’ data, or for a single hacker to <a href="https://www.cnn.com/2019/07/29/business/capital-one-data-breach/index.html">access</a> over 100 million credit card applications. These events are rare but unpredictable. Over time the security of these companies should improve, making such hacks rarer and rarer, but the amount of data these organizations stockpile will also grow. As a result, the hacks will also get bigger and bigger.</p>
<p>These hacks should be seen as <a href="https://en.wikipedia.org/wiki/Black_swan_theory">white swan events</a>, events that are inevitable but blamed on poor human judgment. They have been and will continue to be blamed on slight flaws that were overlooked. Instead, we should recognize the inherent fragility of storing user data together, a design flaw big tech has so far failed to address. And why would Facebook, Google, and other big tech companies store data more robustly? For them, this is a perfect situation. They and only they have the upside potential of their users' data, yet they bear little of the downside risk of these hacks. They own their users' data for their sole use in their algorithms, yet the hacks barely touch them because the data is not about them. The downside risk is pushed to the users: it is only the users' data that is leaked.</p>
<h2 id="buildingdecentralizedalternatives">Building Decentralized Alternatives</h2>
<p>Mistrust in Facebook and social media in general increased greatly after the Cambridge Analytica scandal. Many joined the ‘Delete Facebook’ movement, but because users did not have any viable alternatives, it was largely <a href="https://thenextweb.com/contributors/2018/03/28/data-shows-didnt-deletefacebook-make-sure-change-settings/">unsuccessful</a>. Now many are calling to break up big tech companies, including Facebook co-founder <a href="https://www.nytimes.com/2019/05/09/opinion/sunday/chris-hughes-facebook-zuckerberg.html">Chris Hughes</a>. Breaking up big tech may lead to more competition in the short term, but it is unlikely to have any effect on the underlying problem. Platforms will still maximize user data and attention to increase the value of their advertising space.</p>
<p>Instead, the solution should be to realign incentives online by creating decentralized alternatives to traditional networks. Why must they (likely) be decentralized? As Chris Dixon notes in his article ‘<a href="https://medium.com/s/story/why-decentralization-matters-5e3f79f7638e">Why Decentralization Matters</a>,’ a fundamental issue with centralized platforms is that once they saturate their market, the only way for them to continue growing is to extract more from their existing users. Centralized platforms inevitably become zero-sum, but decentralized protocols remain positive-sum. Anyone can continue to build services on top of them without fear that their ability to build will be taken away, or create content without fear that it will be censored. Thus realigning online incentives may require decentralized protocols.</p>
<h3 id="data">Data</h3>
<p>Marc Andreessen <a href="https://a16z.com/2019/08/29/internet-past-crypto-future-crypto-regulatory-summit/">is</a> among many believers that blockchain technology could help realign incentives between platforms and users. For one, blockchains allow for digital ownership of data. Instead of letting centralized platforms control their data, users could control and store it themselves. Rather than hacking a single server that stores data for millions of users, hackers would need to individually compromise millions of people to get the same amount of data. It is a much more robust way to store data. Several projects are currently building these decentralized storage lockers: <a href="https://3box.io/">3Box</a>, <a href="https://solid.inrupt.com/">Solid</a><sup class="footnote-ref"><a href="#fn2" id="fnref2">[2]</a></sup>, and <a href="https://blockstack.org/">Blockstack</a> are three of the biggest projects you can use right now (with minimal functionality).</p>
<p>Blockstack founder Muneeb Ali <a href="https://medium.com/@muneeb/the-next-wave-of-computing-743295b4bc73">compares</a> the shift from centralized to personal storage to the shift from mainframes to desktop computers during the 1980s and 1990s. Like the shift to desktops, the shift to personal data likely won't seem like a big deal a priori, but it should create new opportunities for users and developers. With user-controlled data, multiple applications could use the same user data in ways that were not previously possible online. Imagine if Facebook opened up its backend so that developers could create alternative user experiences with the same data. Eventually, users may even be able to sell their data directly to companies that want it, essentially giving users a form of universal basic income.</p>
<h3 id="monetization">Monetization</h3>
<p>The next key to realigning incentives is allowing individual users to be rewarded for the data and the content they are creating online. As Jaron Lanier and E. Glen Weyl discuss in <a href="https://hbr.org/2018/09/a-blueprint-for-a-better-digital-society">A Blueprint for a Better Digital Society</a>, people's data is super valuable to Facebook's, Google's, and other big tech companies' machine learning algorithms. People should be able to sell their data instead of handing it over in exchange for free services. But because individuals alone have no negotiating power, Lanier and Weyl propose an abstract organization called a mediator of individual data (MID). They do not propose any implementations of MIDs but they do give a few guidelines:</p>
<ol>
<li>No incentives to abuse member privacy and trust.</li>
<li>Medium-sized. If they are too small, they won't have much bargaining power. If they are too large, they will just become the powerful organizations they were meant to stop.</li>
<li>Data should not be permanently sold. It should only be licensed for defined purposes.</li>
<li>Their benefits should be shared among members.</li>
<li>Individuals need to be able to understand the terms and conditions of the MIDs they join.</li>
<li>Longevity. MIDs cannot realistically last forever, but they should be designed to last longer than a lifetime.</li>
</ol>
<p>It will be a challenge to combine personal ownership of data with Lanier and Weyl's idea of a MID. A successful solution for directly monetizing user data may involve neither personal ownership of data nor satisfy all of Lanier and Weyl's guidelines.</p>
<p>In addition to rewarding individuals for their data, content creators should be rewarded for the value they bring to social media platforms. Many projects aim to reward users for their content, including <a href="https://steem.com/steem-whitepaper.pdf">Steemit</a>, <a href="https://dlive.tv/">Dlive</a>, <a href="https://voice.com/">Voice</a>, and <a href="https://socialx.network/">SocialX</a>. In many of these protocols, rewards are distributed based on users' votes, so for reward distribution to be fair, the votes need to accurately reflect what everyone thinks. But unfortunately, there is no consensus on subjective opinions. If there were, there would be some election system whose outcome everyone is always happy with. Imagine if everyone were happy with the outcome of a presidential election. <a href="https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem">Arrow's impossibility theorem</a> rules out such a system: no ranked voting method can satisfy even a short list of basic fairness criteria at once. Inevitably, some people will not agree with the outcome of a subjective vote. Stated simply, there is no one-size-fits-all subjective voting system.</p>
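<p>Condorcet's paradox, a building block of Arrow's result, makes this concrete. The sketch below uses three hypothetical ballots (purely illustrative, not drawn from any real protocol) whose pairwise majority preferences form a cycle, so no winner can satisfy a majority against every alternative:</p>

```python
# Three hypothetical voters rank three options A, B, C.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for ballot in ballots if ballot.index(x) < ballot.index(y))
    return wins > len(ballots) / 2

# The pairwise majorities form a cycle: A beats B, B beats C, C beats A.
print(majority_prefers("A", "B"))  # True (2-1)
print(majority_prefers("B", "C"))  # True (2-1)
print(majority_prefers("C", "A"))  # True (2-1)
```

<p>Whichever option is declared the winner, a majority of voters would prefer some other option, which is the sense in which no subjective vote can leave everyone happy.</p>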
<p>A related problem with subjective voting systems: how can you stop identities from colluding? First, you would need to be able to detect collusion, but how can you differentiate between a vote for a friend that is collusion and one that is not? Say Bob is friends with Alice offline, so he votes on most of her content. Maybe Bob is friends with Alice because he thinks she has awesome content, or maybe Bob votes on her content because they are friends offline and Alice also votes for Bob's content. Because we cannot see into Bob's mind, it's impossible to know whether he truly likes Alice's content or just likes Alice.<sup class="footnote-ref"><a href="#fn3" id="fnref3">[3]</a></sup> Vitalik Buterin also wrote a great piece <a href="https://vitalik.ca/general/2019/04/03/collusion.html">on collusion</a> that covers this issue with subjective voting protocols far more in depth.</p>
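<p>To see why detection is ambiguous, here is a minimal sketch of a naive reciprocity metric over a hypothetical vote log (all names and vote counts are invented for illustration). Bob and Alice score as perfectly reciprocal voters, but a pair of genuine mutual fans would produce exactly the same score, so the metric alone cannot distinguish collusion from friendship:</p>

```python
from collections import Counter

# Hypothetical (voter, creator) vote log; names are illustrative only.
votes = [
    ("bob", "alice"), ("alice", "bob"),
    ("bob", "alice"), ("alice", "bob"),
    ("carol", "alice"),
]

def reciprocity(a, b):
    """Fraction of the heavier vote direction between a and b that is returned."""
    counts = Counter(votes)
    ab, ba = counts[(a, b)], counts[(b, a)]
    return min(ab, ba) / max(ab, ba) if max(ab, ba) else 0.0

print(reciprocity("bob", "alice"))    # 1.0: perfectly reciprocal
print(reciprocity("carol", "alice"))  # 0.0: one-directional
```

<p>A protocol could penalize high-reciprocity pairs, but doing so would punish honest friends and fans just as much as colluders, which is exactly the problem described above.</p>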
<p>More practical than subjective reward protocols is decentralized access control, and <a href="https://unlock-protocol.com/">Unlock Protocol</a> is building such a system. If anyone can easily set up a paywall around their content, then more content creators will be able to charge for premium content. While letting anyone easily set up a paywall is a step in the right direction, it does not seem a suitable substitute for social media platforms, because there is no community: it's just one content creator monetizing their work. We believe that next-generation social media should ideally allow a whole community of creators both to support each other and to make money from their content.</p>
<h2 id="thefutureofweb3">The Future of Web3</h2>
<p>Decentralized storage and monetization protocols have the potential to solve the issues with existing, centralized internet companies. They also have the potential to increase the rate of innovation online. As previously stated, centralized platforms inevitably become zero-sum, while decentralized protocols remain positive-sum. All developers can continue to build on decentralized systems in ways they cannot on centralized alternatives.</p>
<p>If decentralized platforms outcompete their centralized alternatives, a good parallel may be how desktop computers disrupted the computer industry in the 1980s. As previously stated, Muneeb Ali compares digital ownership of data to desktop computers, and Marc Andreessen has similarly <a href="https://dealbook.nytimes.com/2014/01/21/why-bitcoin-matters/">compared</a> the development of bitcoin to that of desktops. When everyone was using mainframes in the 1960s and 1970s, IBM was the biggest computer company. Then IBM fell behind when it didn’t start selling desktops. Creating desktops would have taken a sizeable amount of time and money, and it would not have made sense for IBM to lose money in the short term on an unproven product. But then Apple came along and built one of the first successful desktops in 1976. Desktops were ultimately successful and gave users more control, much as decentralized networks could give users control over their data and content. But unlike Apple, which captures all the value of the computers it sells, the value captured by decentralized systems will be, well, decentralized. Successful decentralized systems will benefit everyone in the network, not just the company that created them.</p>
<hr class="footnotes-sep">
<section class="footnotes">
<ol class="footnotes-list">
<li id="fn1" class="footnote-item"><p>It's really bad. <a href="#fnref1" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn2" class="footnote-item"><p>Solid doesn't actually use a blockchain, but it's building the same kind of technology, so I mention it. <a href="#fnref2" class="footnote-backref">↩︎</a></p>
</li>
<li id="fn3" class="footnote-item"><p>It’s even impossible to prove our own motives. Maybe with a better understanding of the brain we will one day be able to analyze all the information someone uses to make a decision, but as Kevin Simler and Robin Hanson explain in <em>The Elephant in the Brain</em>, consciousness often only rationalizes what we do. <a href="#fnref3" class="footnote-backref">↩︎</a></p>
</li>
</ol>
</section>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>