Wednesday, October 19, 2016

Orange TRUMPeter Swans: When What You Know Ain't So

Was Donald J. Trump's political rise in 2015-2016 a "black swan" event?  "Yes" is the answer asserted by Jack Shafer in this Politico article. "No" is the answer from other writers, including David Atkins in this article on the Washington Monthly Political Animal Blog.

Orange Swan
My answer is "Yes", but not in the same way that other events are Black Swans.  Orange Swans like the Trump phenomenon fit this aphorism:
"It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so." -- attributed to Mark Twain
In other words, the signature characteristic of Orange Swans is delusion.

Rethinking "Black Swans"

As I have mentioned at the start of this series, the "Black Swan event" metaphor is a conceptual mess. (This post is sixth in the series "Think You Understand Black Swans? Think Again".)

It doesn't make sense to label any set of events as "Black Swans".  It is not the events themselves, but the processes surrounding them (the generating mechanisms, our evidence about them, and our methods of reasoning) that make them unexpected and surprising.

Tuesday, June 21, 2016

Public Statement to the Commission on Enhancing National Cybersecurity, 6-21-2016

[Submitted in writing at this meeting. An informal 5 min. version was presented during the public comment period. This statement is my own and does not represent the views or interests of my employer.]


Cyber security desperately needs institutional innovation, especially involving incentives and metrics.  Nearly every report since 2003 has included recommendations to do more R&D on incentives and metrics, but progress has been slow and inadequate.


Why? Because we have the wrong model for research and development (R&D) on institutions.

My primary recommendation is that the Commission’s report should promote new R&D models for institutional innovation.  We can learn from examples in other fields, including sustainability, public health, financial services, and energy.

What are Institutions and Institutional Innovation?

Institutions are norms, rules, and social structures that enable society to function. Examples include marriage, consumer credit reporting and scoring, and emissions credit markets.

Cyber security[1] has institutions today, but many are inadequate, dysfunctional, or missing.  Examples:
  1. overlapping “checklists + audits”; 
  2. professional certifications; 
  3. post-breach protection for consumers (e.g. credit monitoring); 
  4. lists of “best practices” that have never been tested or validated as “best” and therefore are no better than folklore.  

There is plenty of talk about “standards”,  “information sharing”, “public-private partnerships”, and “trusted third parties”, but these remain mostly talking points and not realities.

Institutional innovation is a set of processes that either change existing institutions in fundamental ways or create new institutions.   Sometimes this happens with concerted effort by “institutional entrepreneurs”, and other times it happens through indirect and emergent mechanisms, including chance and “happy accidents”.

Institutional innovation takes a long time – typically ten to fifty years.

Institutional innovation works differently from technological innovation, which we do well.  In contrast, we understand institutional innovation poorly, especially how to accelerate it or steer it toward specific goals.

Finally, institutions and institutional innovation should not be confused with “policy”.  Changes to government policy may be an element of institutional innovation, but they do not encompass the main elements – people, processes, technology, organizations, and culture.

The Need: New Models of Innovation

Through my studies, I have come to believe that institutional innovation is much more complicated  [2] than technological innovation.   It is almost never a linear process from theory to practice with clearly defined stages.

There is no single best model for institutional innovation.  There needs to be creativity in “who leads”, “who follows”, and “when”.  The normal roles of government, academics, industry, and civil society organizations may be reversed or otherwise radically redrawn.

Techniques are different, too. It can be orchestrated as a “messy” design process [3].  Fruitful institutional innovation in cyber security might involve some of these:
  • “Skunk Works”
  • Rapid prototyping and pilot tests
  • Proof of Concept demonstrations
  • Bricolage[4]  and exaptation[5]
  • Simulations or table-top exercises
  • Multi-stakeholder engagement processes
  • Competitions and contests
  • Crowd-sourced innovation (e.g. “hackathons” and open source software development)

What all of these have in common is that they produce something that can be tested and can support learning.  They are more than talking and consensus meetings.

There are several academic fields that can contribute to defining and analyzing new innovation models, including Institutional Sociology, Institutional Economics, Sociology of Innovation, Design Thinking, and the Science of Science Policy.

Role Models

To identify and test alternative innovation models, we can learn from institutional innovation successes and failures in other fields, including:
  • Common resource management (sustainability)
  • Epidemiology data collection and analysis (public health)
  • Crash and disaster investigation and reporting (safety)
  • Micro-lending and peer-to-peer lending (financial services)
  • Emissions credit markets and carbon offsets (energy)
  • Open software development (technology)
  • Disaster recovery and response[6]  (homeland security)

In fact, there would be great benefit in a joint R&D initiative for institutional innovation that applies to these other fields as well as cyber security.  Furthermore, there would be benefit in making this an international effort, not one limited to the United States.


[1] "Cyber security" includes information security, digital privacy, digital identity, digital information property, digital civil rights, and digital homeland & national defense.
[2] For case studies and theory, see: Padgett, J. F., & Powell, W. W. (2012). The Emergence of Organizations and Markets. Princeton, NJ: Princeton University Press.
[3] Ostrom, E. (2009). Understanding Institutional Diversity. Princeton, NJ: Princeton University Press.
[4] “something constructed or created from a diverse range of available things.”
[5]  “a trait that has been co-opted for a use other than the one for which natural selection has built it.”
[6] See: Auerswald, P. E., Branscomb, L. M., Porte, T. M. L., & Michel-Kerjan, E. O. (2006). Seeds of Disaster, Roots of Response: How Private Action Can Reduce Public Vulnerability. Cambridge University Press.

Wednesday, March 30, 2016

#Tay Twist: @Tayandyou Twitter Account Was Hijacked ...By Bungling Microsoft Test Engineers (Mar. 30)

[Update 5:35am  From CNBC
Microsoft's artificial intelligence (AI) program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages. However, the program once again went wrong and Tay's account was set to private after it began repeating the same message over and over to other Twitter users. According to Microsoft, the account was reactivated by accident during testing.
"Tay remains offline while we make adjustments," a spokesperson for the company told CNBC via email. "As part of testing, she was inadvertently activated on Twitter for a brief period of time." (emphasis added)
I'm puzzled by this explanation but I'll go back through the evidence to see which explanation is best supported.]

[Update 6:35am  It now looks like the "account hack" was really a bungled test session by someone at Microsoft Research -- effectively a "self-hack".

Important: This episode was not "Tay being Tay".]

The @Tayandyou Twitter chatbot has been silent since last Thursday when Microsoft shut it down. Shortly after midnight today, Pacific time, the @Tayandyou Twitter account woke up and started blasting tweets at very high volume.  All of these tweets included other Twitter handles in them, maybe from previous tweets, maybe from followers.

But it became immediately apparent that something was different and wrong.  These tweets didn't look anything like the ones before, in style, structure, or sentience.  From the tweet conversations and from the sequence of events, I believe that the @Tayandyou account was hacked today (March 30), and was active for 15 minutes, sending over 4,200 tweets.

[Update 4:30am
The online media has started posting articles, but they all treat this as more "Tay runs amok".  Only The Verge has updated their story.  If you read an article that doesn't at least consider that Tay's Twitter account was hacked, could you please add a comment with link to this post?  Thanks.]

Tuesday, March 29, 2016

Media Coverage of #TayFail Was "All Foam, No Beer"

One of the most surprising things I've discovered in the course of investigating and reporting on Microsoft's Tay chatbot is how the rest of the media (traditional and online) have covered it, and how the digital media works in general.

None of the articles in major media included any investigation or research.  None.  Let that sink in.

All foam, no beer.

Sunday, March 27, 2016

Microsoft's Tay Has No AI

(This is the third of three posts about Tay. Previous posts: "Poor Software QA..." and "...Smoking Gun...")

While nearly all the press about Microsoft's Twitter chatbot Tay (@Tayandyou) is about artificial intelligence (AI) and how AI can be poisoned by trolling users, there is a more disturbing possibility:

  • There is no AI (worthy of the name) in Tay. (probably)

I say "probably" because the evidence is strong but not conclusive and the Microsoft Research team has not publicly revealed their architecture or methods.  But I'm willing to bet on it.

Evidence comes from three places. The first is from observing a small non-random sample of Tay tweet and direct-message sessions (posted by various users). The second is circumstantial, from the composition of the team behind Tay. The third is from a person who claims to have worked at Microsoft Research on Tay until June 2015.  He/she made two comments to my first post but unfortunately deleted the second comment, which had lots of details.

Saturday, March 26, 2016

Microsoft #TAYFAIL Smoking Gun: ALICE Open Source AI Library and AIML

[Update 3/27/16: see also the next post: "Microsoft's Tay Has No AI"]

As follow up to my previous post on Microsoft's Tay Twitter chatbot (@Tayandyou), I found evidence of where the "repeat after me" hidden feature came from.  Credit goes to SSHX for this lead in his comment:
"This was a feature of AIML bots as well, that were popular in 'chatrooms' way back in the late 90's. You could ask questions with AIML tags and the bots would automatically start spewing source into the room and flooding it. Proud to say I did get banned from a lot of places."
A quick web search revealed great evidence. First, some context.

AIML is an acronym for "Artificial Intelligence Markup Language", which "is an XML-compliant language that's easy to learn, and makes it possible for you to begin customizing an Alicebot or creating one from scratch within minutes."  ALICE is an acronym for "Artificial Linguistic Internet Computer Entity".  ALICE is a free natural-language artificial intelligence chat robot.


This Github page has a set of AIML statements starting with "R". (This is a fork of "9/26/2001 ALICE", so there are probably some differences from Base ALICE today.)  Here are two statements matching "REPEAT AFTER ME" and "REPEAT THIS".

Snippet of AIML statements with "REPEAT AFTER ME" AND "REPEAT THIS"
As it happens, there is an interactive web page with Base ALICE here. (Try it out yourself.) Here is what happened when I entered "repeat after me" and also "repeat this...":

In Base ALICE, the template response to "repeat after me" is "...".  In other words, a NOP ("no operation").  This is different from the AIML statement, above, which is ".....Seriously....Lets have a conversation and not play word games.....".  It looks like someone just deleted the text following the first three periods.

But the template response to "repeat this X" is "X" (in quotes), which is consistent with the AIML statement, above.
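The rule behavior described above can be sketched in a few lines of Python. This is a toy illustration of AIML-style pattern matching, not Microsoft's actual implementation; the two rules are taken from the Base ALICE observations above, and the matcher itself is my own simplification:

```python
import re

# Toy AIML-style rule table: (pattern, template).
# "*" is the AIML wildcard; "<star/>" in a template echoes the captured text.
RULES = [
    ("REPEAT AFTER ME", "..."),       # Base ALICE: effectively a no-op
    ("REPEAT THIS *", '"<star/>"'),   # Base ALICE: echo the text back in quotes
]

def respond(user_input):
    """Return the template response for the first matching rule, else None."""
    text = user_input.upper().strip()
    for pattern, template in RULES:
        # Turn the AIML pattern into a regex: "*" captures the rest of the input.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*)") + "$"
        m = re.match(regex, text)
        if m:
            star = m.group(1).strip() if m.groups() else ""
            return template.replace("<star/>", star)
    return None

print(respond("repeat after me"))          # prints ...
print(respond("repeat this hello world"))  # prints "HELLO WORLD"
```

Note how the second rule hands attacker-controlled text straight back to the output, which is exactly why "repeat after me"-style rules were abused by trolls.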


From this evidence, I infer that Microsoft's Tay chatbot is using the open-sourced ALICE library (or similar AIML library) to implement rule-based behavior.  Though they did implement some rules to thwart trolls (e.g. gamergate), they left in other rules from previous versions of ALICE (either Base ALICE or some forked versions).

My assertion about root cause stands: poor QA process on the ALICE rule set allowed the "repeat after me" feature to stay in, when it should have been removed or modified significantly.

Another inference is that "repeat after me" is probably not the only "hidden feature" in AIML rules that could have caused misbehavior.  It was just the one that the trolls stumbled upon and exploited.  Someone with access to Base ALICE rules and also variants could have exploited these other vulnerabilities.
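A QA pass of the kind argued for here could start by mechanically scanning the AIML rule set for templates that echo user input back verbatim. Here is a minimal sketch; the embedded AIML fragment is illustrative only, not taken from the actual Tay or ALICE rule files:

```python
import xml.etree.ElementTree as ET

# Illustrative AIML fragment (hypothetical, not the real Tay/ALICE rule set).
AIML = """<aiml>
  <category>
    <pattern>REPEAT THIS *</pattern>
    <template>"<star/>"</template>
  </category>
  <category>
    <pattern>HELLO</pattern>
    <template>Hi there!</template>
  </category>
</aiml>"""

def find_echo_rules(aiml_source):
    """Flag categories whose template contains <star/>, i.e. rules that
    repeat attacker-controlled input back verbatim."""
    root = ET.fromstring(aiml_source)
    flagged = []
    for category in root.iter("category"):
        template = category.find("template")
        if template is not None and template.find("star") is not None:
            flagged.append(category.find("pattern").text)
    return flagged

print(find_echo_rules(AIML))  # prints ['REPEAT THIS *']
```

A scan like this would have surfaced "repeat after me"-style rules before launch, which is the QA step the Tay team appears to have skipped.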