Now Available: Volumes I, II, III, and IV of the Collected Published and Unpublished Papers.

NOW AVAILABLE ON YOUTUBE: LECTURES ON KANT'S CRITIQUE OF PURE REASON. To view the lectures, go to YouTube and search for "Robert Paul Wolff Kant." There they will be.

NOW AVAILABLE ON YOUTUBE: LECTURES ON THE THOUGHT OF KARL MARX. To view the lectures, go to YouTube and search for "Robert Paul Wolff Marx."

Saturday, December 29, 2018


OK, these two responses really help.  Jerry's spells out how someone can buy access for an ad or other material to a targeted selection of users.  Presumably the data FaceBook uses to comply with Jerry's purchase is proprietary.  They own that data, and they sell a subset of it to Jerry for a fee, or rather he gets to use a subset of it for a fee.  I assume Jerry has to have a checkable identity to buy the data, which explains the need for that poor shlub who got busted by Mueller for selling phony on-line IDs to whatever that company's name was.

Which brings me to Dean the Librarian.  S/He talks about 'bots, which I assume is short for robots.  Does whoever controls and launches the 'bots [the Sorcerer's Apprentice, in the great old movie Fantasia] need to buy data from FaceBook?  How else can the controller shape the audience?

I am getting closer to understanding, but I am not there yet.

Let me explain the hunch, or intuition, behind all of this.  It occurred to me that maybe the FaceBook executives could quite easily control the use of their platform, contrary to what they said to Congress, but that it would cost them money in lost revenues to do so.  Is that so?


Dean said...

Now I'm wandering well out of my comfort zone. I don't use FB or Twitter, though I read public feeds (Glenn Greenwald's and Corey Robin's, to be precise) on the latter. Yes, 'bots (I prefer the apostrophe) are non-human "users" of these services disguised as human users, i.e., they operate automatically, and therefore they can generate more messages and iterations of messages than an ordinary person. I can't answer the question posed, whether or not they buy data, but I don't believe they need to. On Twitter (and Instagram, Facebook, etc.), for example, hashtags identify topics (e.g., #election2016, #marxism, etc.) that users can follow. In other words, users self-identify as being interested in particular topics. The launchers of 'bots accumulate those topics/hashtags, push their messages so tagged, and a ready and willing audience is available to consume them.
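[Editorially, the mechanism Dean describes can be sketched in a few lines. This is a purely illustrative toy in Python, not any real social-media API; the function name `tag_message` and the sample hashtags are invented for the example. The point is only that no purchased data is needed: the operator appends publicly visible hashtags, and the platform itself routes the message to everyone following those tags.]

```python
# Illustrative sketch only: how a 'bot operator might attach accumulated
# hashtags to a canned message so it reaches users who follow those tags.
# No real platform API is involved; all names here are hypothetical.

def tag_message(message: str, hashtags: list[str], limit: int = 3) -> str:
    """Append up to `limit` hashtags to a message."""
    chosen = hashtags[:limit]
    return message + " " + " ".join("#" + tag for tag in chosen)

# Topics/hashtags the operator has gathered from public feeds.
trending = ["election2016", "marxism", "freespeech", "economy"]

post = tag_message("Read this before you vote!", trending)
print(post)  # Read this before you vote! #election2016 #marxism #freespeech
```

[Because users who follow a hashtag have already self-identified as interested, the tagging alone does the audience targeting, which is why no data purchase from the platform is required.]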

Again, I'm not entirely comfortable with my own conceptualization here.

Anonymous said...

Facebook executives could control their platform. And doing so would result in lost revenues. But not for the reason I believe you are suggesting, Dr. Wolff.

If I understand you correctly, your intuition is that botters pay Facebook for data, thus generating revenues for Facebook directly - and botters drive clicks and views for Facebook users sympathetic to the bot's messaging, which further increases the value of Facebook as an advertising platform to other advertisers. So by controlling bot use on the platform, Facebook forgoes direct revenues from the botters, and indirect revenues from other advertisers for whom botters generate web traffic.

Although the foregoing is true, it does not represent the real risk to Facebook's revenues attributable to increased executive control of the content on the platform. The true risk to Facebook's revenues resulting from increased oversight or control of the platform is that once Facebook crosses a certain line with respect to censoring the content on its platform, a critical mass of users will abandon the platform in the name of "free speech" and seek an un-censored alternative.

This goes back to one of the founding principles of the internet: the free and open sharing and exchange of information. Facebook's primary value to advertisers is its breadth and pervasiveness. Since crushing Myspace, Facebook has become akin to a public utility monopoly: if you want to connect with just about anybody in the world on one social media platform, from work colleagues to niche enthusiasts to your parents, Facebook is the only game in town. But the internet as an industry is far more fickle than sewage or electricity.

Facebook fears that once it starts restricting content to the point where everyday users notice as much, the loudest, most aggressive and ideological corners of the internet will abandon the platform altogether. When this happens, it will weaken the ubiquity of the platform. And when Facebook is less ubiquitous, fewer people will feel the need to use it, which will cause more people to leave, and result in a revenue and user-base death spiral.

I recommend that you google "myspace" or "America Online" to learn how quickly this can happen.

Robert Paul Wolff said...

Fascinating. Thank you. Little by little, I am entering the modern era.

Dean said...

In addition to the incentive Anonymous describes @3:20 PM, there is a legal reason for FB not to appear to be a curator of the content posted by its users. Generally speaking, FB enjoys a degree of protection against liability for illegal postings by its users. Think copyright or trademark infringement, or defamatory statements. Federal law to some extent immunizes an online service provider against these risks so long as the provider merely transmits and stores the content. (There are lots of exceptions and qualifications, but you get the picture.) A provider who initiates or modifies content will not be immunized. FB, Twitter, ISPs, and the like do not want to be perceived in legal terms as publishers or editors. They want to be viewed as platforms for more or less unfettered user interactions, as Anonymous suggests.

Dean said...

Just in, a story in The Nation reporting on studies that find relatively little evidence of real influence on the 2016 election by Russian hackers/'bots/trolls:

Additional points: first, I have a bias against the likelihood of significant effects due to Russian meddling, and this Nation story confirms it. I am generally very wary of claims about technological advances, which tend to be hyped and often implausible. Second, I haven't read the studies reported in the Nation piece, so I have no idea how accurate the story is. Third, I learned about this story on Glenn Greenwald's Twitter feed. Hence, this very comment illustrates how information (accurate? reliable? biased?) that appears in social networks can propagate and drive discussion about controversial topics. Finally, that the IRA work might have been largely ineffective doesn't minimize the importance of claims, if proven, that Trump's campaign paid for Russian assistance via social networking tools. His campaign would have been no more immune than anybody else to tech hype.

formerly a wage slave said...

Tony Norfield has done an analysis of how the tech giants operate

You might find it useful. He also recently attended a conference on these technologies, and somewhere on his blog he links to it. (Sorry, but I just cannot keep up with everything.)

also there is this