The Grids of Nobel (Medial Temporal Lobe-rific)

This content will be cross-posted to Synthetic Daisies.


This year’s Nobel Prize in Physiology or Medicine went to John O’Keefe, May-Britt Moser, and Edvard I. Moser for their work on the neurophysiology of spatial navigation [1]. The prize was awarded “for their discoveries of cells that constitute a positioning system in the brain”. Some commentators have referred to these discoveries as constituting an “inner GPS system”, although this description is technically and conceptually incorrect (as I will soon highlight). As a PhD student with an interest in spatial cognition, I read (with enthusiasm!) the place cell literature and the first papers on grid cells [2]. So upon hearing they had won, I actually recognized their names and contributions. While recognition of the grid cell discovery might seem premature (the original discovery was made in 2005), the creation of iPS cells (the subject of the 2012 award) only dates to 2007.

John O’Keefe is a pioneer in the area of place cells, which provided a sound neurophysiological basis for understanding how spatial cognitive mechanisms are tied to their environmental context. The Mosers [3] went a step further with this framework, discovering a type of cell that provides the basis for a metric space (or perhaps more accurately, a tiling) to which place cell and other location-specific information are tied. The vertices of this grid are marked by the firing of the aptly-named grid cells. Together, these types of cells provide a mental model of the external world in the medial temporal lobe of mammals.

Locations to which grid cells respond most strongly.

Place cells (of which there are several different types) are small cell populations in the CA1 and CA3 fields of the hippocampus that encode a memory for the location of objects [4]. Place cells have receptive fields that represent specific locations in space: a cell’s receptive field corresponds to the locations and orientations to which the cell responds most strongly. When the organism is located in (or approaches) one of these receptive fields, the cell fires vigorously, at rates of up to roughly 20 Hz. Because place cells sit in the memory-encoding center of the brain, they respond strongly when an animal passes or approaches a recognized location. Grid cells, located in the entorhinal cortex, serve a distinct but related role. While spatial cognition involves many different types of input (from multisensory to attentional), place cells and grid cells are specialized as a mechanism for location-specific memory.
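
As a concrete illustration of how such location-specific firing is usually quantified, here is a minimal Python sketch of an occupancy-normalized firing-rate map. The tracking data, spike positions, sampling rate, and the place-field location at (0.3, 0.7) are all synthetic values chosen only for illustration, not drawn from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic tracking data: 10 minutes of positions sampled at 50 Hz in a 1 m x 1 m
# arena, plus spikes clustered around a made-up place field centered at (0.3, 0.7).
positions = rng.uniform(0.0, 1.0, size=(50 * 600, 2))
spikes = rng.normal(loc=[0.3, 0.7], scale=0.05, size=(150, 2))

bins = np.linspace(0.0, 1.0, 21)          # 5 cm spatial bins
occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[bins, bins])
spike_counts, _, _ = np.histogram2d(spikes[:, 0], spikes[:, 1], bins=[bins, bins])

dt = 1.0 / 50.0                           # seconds of occupancy per tracking sample
rate_map = np.divide(spike_counts, occupancy * dt,
                     out=np.zeros_like(spike_counts), where=occupancy > 0)

peak_bin = np.unravel_index(np.argmax(rate_map), rate_map.shape)
print(f"Peak firing rate: {rate_map[peak_bin]:.1f} Hz in spatial bin {peak_bin}")
```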

Variations on a grid in urban layouts. COURTESY: Athenee: the rise and fall of automobile culture.

How do we know this part of the brain is responsible for such representations? Both place and grid cells have been confirmed through electrophysiological recordings. In the case of place cells, lesion studies [5] have also been conducted to demonstrate behavioral deficits during naturalistic behavior. In [5], lesioning hippocampal tissue resulted in deficits in spatial memory and exploratory behavior. In humans, the virtual Morris Water Maze [6] can be used to assess performance in finding a specific landmark (in this case, a submerged platform) embedded in a virtual scene. The recall of a particular location is contingent on a person’s ability to a) find that location relative to other landmarks, and b) successfully rotate their mental model of the space.

An example of learning in rats during the Morris Water Maze task. COURTESY: Nutrition, Neurogenesis and Mental Health Laboratory, King’s College London.

As a relatively recent discovery, grid cells provide a framework for a geometric (e.g. Euclidean) representation of space. Like that of place cells, the activity of grid cells is dependent upon the behavior of animals in a spatial context. Yet grid cells help to provide a larger context for spatial behavior, namely the interstitial space between landmarks. This allows for both the creation and recognition of patterns at the landscape scale. Street patterns in urban settlements that form grids and wheel-and-spoke layouts are no accident: they reflect a default way in which humans organize space.

An anatomical and functional view of the medial temporal lobe. COURTESY: Figure 1 in [7].

There are some interesting but largely unexplored relationships between physical movement and spatial navigation, both of which involve a coordinate system for the world that surrounds a given organism. For example, goal-directed arm movements occur within a multimodal spatial reference frame that involves the coordination of visual and touch information [8]. While limb movement and walking involve timing mechanisms associated with the motor cortex and cerebellum, there are implicit aspects of spatial memory in movement, particularly over long distances and periods of time. There is an emerging field called movement ecology [9] which deals with these complex interconnections.

Another topic that falls into this intersection is path integration [10]. Like the functions that involve place and grid cells, path integration also involves the medial temporal lobe. Path integration is the homing ability of animals that results from an odometer function — the brain keeps track of footsteps and angular turns in order to generate an abstract map of the environment. This information is then used to return to a nest or home territory. Path integration has been the basis for digital evolution studies on the evolutionary origins of spatial cognition [11], and might be more generally useful in understanding the relationships between the evolutionary conservation of spatial memory and its deployment in virtual environments and city streets. While this is closer to the definition of an “inner GPS system”, there is so much more to this fascinating neurophysiological system.
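
To make the odometer idea concrete, here is a minimal Python sketch of dead reckoning (not a model of entorhinal circuitry): step counts and heading changes are summed into a position estimate, from which a homing vector back to the start can be read off. The step/turn sequence is invented for illustration.

```python
import numpy as np

# A hypothetical outbound trip as (steps, turn) pairs: number of steps taken,
# followed by a change in heading (radians). Step length is arbitrary.
outbound = [(10, 0.0), (5, np.pi / 2), (8, -np.pi / 4), (6, np.pi / 3)]

heading = 0.0              # current heading, radians
position = np.zeros(2)     # current (x, y) position, in step units

for steps, turn in outbound:
    heading += turn                                                   # update heading
    position += steps * np.array([np.cos(heading), np.sin(heading)])  # integrate steps

# The homing vector points from the current position back to the start (the "nest").
homing_vector = -position
homing_distance = np.linalg.norm(homing_vector)
homing_bearing = np.degrees(np.arctan2(homing_vector[1], homing_vector[0]))

print(f"Position after outbound trip: {position}")
print(f"Distance home: {homing_distance:.2f} steps, bearing: {homing_bearing:.1f} degrees")
```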

NOTES:

[1] Nobel Prize Committee: The Nobel Prize in Physiology or Medicine 2014. Nobelprize.org, Nobel Media AB. October 6 (2014).

[2] Hafting, T., Fyhn, M., Molden, S., Moser, M-B., and Moser, E.I.   Microstructure of a spatial map in the entorhinal cortex. Nature, 436(7052), 801–806 (2005).

[3] Moser, E.I., Kropff, E., and Moser, M-B.   Place Cells, Grid Cells, and the Brain’s Spatial Representation System. Annual Review of Neuroscience, 31, 69-89 (2008).

[4] O’Keefe, J. and Nadel, L.   The Hippocampus as a Cognitive Map. Oxford University Press (1978).

[5] For the original Morris Water Maze paper: Morris, R.G.M., Garrud, P., Rawlins, J.N., and O’Keefe, J.   Place navigation impaired in rats with hippocampal lesions. Nature, 297(5868), 681–683 (1982).

[6] For the virtual adaptation of the water maze for humans, please see: Astur, R.S., Taylor, L.B., Mamelak, A.N., Philpott, L., and Sutherland, R.J.   Humans with hippocampus damage display severe spatial memory impairments in a virtual Morris water task. Behavioural Brain Research, 132, 77–84 (2002).

[7] Bizon, J.L. and Gallagher, M.   More is less: neurogenesis and age-related cognitive decline in Long-Evans rats. Science of Aging Knowledge Environment, (7), re2 (2005).

[8] Shadmehr, R. and Wise, S.P.   The Computational Neurobiology of Reaching and Pointing. MIT Press, Cambridge, MA (2005).

[9] Nathan, R.   An emerging movement ecology paradigm. PNAS, 105(49), 19050–19051 (2008).

[10] McNaughton, B.L., Battaglia, F.P., Jensen, O., Moser, E.I., and Moser, M-B.   Path integration and the neural basis of the ‘cognitive map’. Nature Reviews Neuroscience, 7, 663-678 (2006).

[11] Grabowski, L.M., Bryson, D.M., Dyer, F.C., Pennock, R.T., and Ofria, C.   A case study of the de novo evolution of a complex odometric behavior in digital organisms. PLoS One, 8(4), e60466 (2013) AND Jacobs, L.F., Gaulin, S.J., Sherry, D.F., and Hoffman, G.E.   Evolution of spatial cognition: sex-specific patterns of spatial behavior predict hippocampal size. PNAS, 87(16), 6349-6352 (1990).

Fun with F1000: publish it and the peers will come

This content will be cross-posted to Synthetic Daisies. Please also see the update before the notes section.


For the last several months, I have been working on a paper called “Animal-oriented Virtual Environments: illusion, dilation, and discovery” [1] that is now published at F1000 Research (also available as a pre-print at PeerJ). The paper has gone through several iterations, from a short 1800-word piece (first draft) to a full-length article. The process included several stages of editor-driven peer review [2] and took approximately nine months. Because of its speculative nature, this paper could be an excellent candidate for testing out this review method.

The paper is now live at F1000 Research.

Evolution of a research paper. The manuscript has been hosted at PeerJ Preprints since Draft 2.

F1000 Research uses a method of peer review called post-publication peer review. For those who are not aware, F1000 approaches peer review in two steps: a submission and editorial-approval stage, and a publication and review-by-selected-peers stage. Let’s walk through these.

The first step is to submit an article. Some (data-driven) articles are published to the website immediately. However, for position pieces and theoretically-driven articles such as this one, a developmental editor is consulted to provide pre-publication feedback. This helps to tighten the arguments for the next stage: post-publication peer review.

The next stage is to garner comments and reviews from other academics and the public (likely unsolicited academics). While this might take some time, the reviews (edited for relevance and brevity) will appear alongside the paper. The paper’s “success” will then be judged on those comments. No matter what the peer reviewers have to say, however, the paper will be citable in perpetuity and might well have a very different life in terms of its citation index.

Why would we want such an alternative available to us? Alternative forms of peer review and evaluation can both open up the scope of scientific debate and resolve some of the vagaries of conventional peer review [3]. This is not to say that we should strive toward the “fair-and-balanced” approach of journalistic myth. Rather, it is a recognition that scientists do a lot of work (e.g. peer review, negative results, conceptual formulation) that either falls through the cracks or never gets made public. Alternative approaches such as post-publication peer review are an attempt to remedy that, and as a consequence they also serve to enhance the scientific approach.

COURTESY: Figure from [5].

The rise of social media and digital technologies has also created the need for new scientific dissemination tools. While traditional scientific discovery operates on a relatively long time-scale [6], science communication and inspiration do not. Using an open science approach will effectively open up the scientific process, both in terms of new perspectives from the community and insights that arise purely from interactions with colleagues [7].

One proposed model of multi-staged peer review. COURTESY: Figure 1 in [8].

UPDATE: 9/2/2014:

I received an e-mail from the staff at F1000Research in appreciation of this post. They also wanted me to make the following points about their version of post-publication peer review a bit more clear. So, to make sure this process is not misrepresented, here are the major features of the F1000 approach in bullet-point form:

* input from the developmental editors is usually fairly brief. This involves checking for coherence and sentence structure. The developmental process is substantial only when a paper requires additional feedback before publication.

* most papers, regardless of article type, are published within a week to 10 days of initial submission.

* the peer reviewing process is strictly by invitation only, and only reports from the invited reviewers contribute to what is indexed along with the article.

* commenting from scientists with institutional email addresses is also allowed. However, these comments do not affect whether or not the article passes the peer review threshold (e.g. two “acceptable” or “positive” reviews).

NOTES:

[1] Alicea B.   Animal-oriented virtual environments: illusion, dilation, and discovery [v1; ref status: awaiting peer review, https://f1000r.es/2xt] F1000Research 2014, 3:202 (doi: 10.12688/f1000research.3557.1).

This paper was derived from a Nature Reviews Neuroscience paper and several popular press interviews [ab] that resulted.

[2] Aside from an in-house editor at F1000, Corey Bohil (a colleague from my time at the MIND Lab) was also gracious enough to read through and offer commentary.

[3] Hunter, J.   Post-publication peer review: opening up scientific conversation. Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2012.00063 (2012) AND Tscheke, T.   New Frontiers in Open Access Publishing. SlideShare, October 22 (2013) AND Torkar, M.   Whose decision is it anyway? F1000 Research blog, August 4 (2014).

[4] By opening up peer review and manuscript publication, scientific discovery might become more piecemeal, with smaller discoveries and curiosities (and even negative results) getting their due. This will produce a richer and more nuanced picture of any given research endeavor.

[5] Mandavilli, A.   Trial by Twitter. Nature, 469, 286-287 (2011).

[6] One high-profile “discovery” (even based on flashes of brilliance) can take anywhere from years to decades, with a substantial period of interpersonal peer-review. Most scientists keep a lab notebook (or some other set of records) that document many of these “pers.comm.” interactions.

[7] Sometimes, venues like F1000 can be used to feature attempts at replicating high-profile studies (such as the Stimulus-triggered Acquisition of Pluripotency (STAP) paper, which was published and retracted at Nature within a span of five months).

[8] Pöschl, U.   Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2012.00033 (2012).

Incredible, Evo-Developmental, and Aestastical Readings!

This is an example of something I do quite often on my blog Synthetic Daisies. I also run a micro-blog on Tumblr called Tumbld Thoughts. It is a sort of developmental league for features on things from my reading queue. This allows me to combine tangentially- or thematically-connected papers into a graphically-intensive single feature. I then make a meta-connection between these posts and feature it on Synthetic Daisies (to which this content is also cross-posted).


For example, the three features in this post are based on publications, articles, and videos from my reading queue, serving up some Summertime (the Latin word for Summer is Aestas) inspiration. The title is suggestive of the emergent meta-theme (I’ll leave it up to the reader to determine what exactly that is).

I. Incredible Technologies!

Real phenomena, incredible videos. Here is a list of resources on how film and animation are used to advance science and science fiction alike, in no particular order:

Gibney, E.   Model Universe Recreates Evolution of the Cosmos. Nature News, May 7 (2014).

A Virtual Universe. Nature Video, May 7 (2014).

Creating Gollum. Nature Video, December 11 (2013).

Letteri, J.   Computer Animation: Digital heroes and computer-generated worlds. Nature, 504, 214-216 (2013).

Laser pulse shooting through a bottle and visualized at a trillion frames per second. Camera Culture Group YouTube Channel, December 11 (2011).

Hardesty, L.   Trillion Frame-per-Second Video. Phys.org, December 13 (2011).

Ramesh Raskar: imaging at a trillion frames per second. Femto-photography TED Talk, July 26 (2012).

Preston, E.   How Animals See the World. Nautil.us, Issue 11, March 20 (2014).

How Animals See the World. BuzzFeed Video YouTube Channel, July 5 (2012).

In June, a Synthetic Daisies post from 2013 was re-published on the science and futurism site Machines Like Us. The post, entitled “Perceptual time and the evolution of informational investment“, is a cross-disciplinary foray into comparative animal cognition, the evolution of the brain, and the evolution of technology.

II. Evo-Developmental Findings (new)!

Phylogenetic representation of sex-determination mechanism. From Reading [3].

Here are some evolution-related links from my reading queue. Topics: morphological transformations [1], colinearity in gene expression [2], and sex determination [3].

The first two readings [1,2] place pattern formation in development in an evolutionary context, while the third [3] is a brand new paper on the phylogeny, genetic mechanisms, and dispelling of common myths involved with sex determination.

III. Aestastical Readings (on Open Science)!

Welcome to the long tail of science. This tour will consist of three readings: two on the sharing of “dark data”, and one on measuring “inequality” in citation rates. In [4, 5], the authors introduce us to the concept of dark data. When a paper is published, the finished product typically includes only a small proportion of the data generated to create the publication (Supplemental Figures notwithstanding). Thus, dark data are the data that go unused, ranging from superfluous analyses to unreported experiments and even negative results. With the advent of open science, however, all of these data are potentially available for both secondary analysis and presentation as something other than a formal journal paper. The authors of [5] contemplate the potential usefulness of sharing these data.

Dark data and data integration meet yet again. This time, however, the outcome might be maximally informative. From reading [5].

In the third paper [6], John Ioannidis and colleagues contemplate patterns in citation data that reveal a Pareto/power-law structure. That is, about 1% of all authors in the Scopus database produce a large share of all published scientific papers. This might be related to the social hierarchies of scientific laboratories, as well as to publishing consistency and career longevity. But not to worry: if you occupy the long tail, there could be many reasons for this, not all of which are harmful to one’s career.

BONUS FEATURE: To conclude, I would like to provide a window into what I have been doing for the past six months. If you read Synthetic Daisies with some regularity, you may be aware that I ran out of funding at my former academic home. As a consequence of not being able to find a replacement position, I am pursuing an academic start-up called Orthogonal Research (an open-science initiative that is intensively virtual).

The objective is to leverage my collaborations to produce as much work as possible. Under this affiliation, I have worked on several papers, started work on a collaborative project called DevoWorm, and advanced a vision of radically open and virtual science. While I have not been able to obtain the kind of seed funding typical of a start-up that deals in tangible goods, the goal is to produce research, a formal affiliation, and associated activities (consulting, content creation) in a structured manner, perhaps leading to future funding and other opportunities.


My vision for open virtual science (with the Orthogonal Science logo at the lower right).

While there are limitations to this model, I have gone through two “quarters” (based on the calendar year, not financial year) of activity. The activity reports for Q1 and Q2 can be downloaded here. As it happens, this has been quite a productive six-month period.

Spread the word about this idea, and perhaps this model of academic productivity can evolve in new and ever more fruitful ways. I will be producing a white paper on the idea of a research start-up, and it should be available sometime in the near future. If you are interested in discussing this more with me one-on-one, please contact me.

NOTES:

[1] Arthur, W.   D’Arcy Thompson and the Theory of Transformations. Nature Reviews Genetics, 7, 401-406 (2006).

[2] Rodrigues, A.R. and Tabin, C.J.   Deserts and Waves in Gene Expression. Science, 340, 1181-1182 (2013).

[3] Bachtrog, D. et al. and the Tree of Sex Consortium   Sex Determination: Why So Many Ways of Doing It? PLoS Biology, 12(7), e1001899 (2014).

[4] Wallis, J.C., Rolando, E., and Borgman, C.L.   If We Share Data, Will Anyone Use Them? Data Sharing and Reuse in the Long Tail of Science and Technology. PLoS One, 8(7), e67332 (2013).

[5] Heidorn, P.B.   Shedding Light on the Dark Data in the Long Tail of Science. Library Trends, 57(2), 280-299 (2008).

[6] Ioannidis, J.P.A., Boyack, K.W., and Klavans, R.   Estimates of the Continuously Publishing Core in the Scientific Workforce. PLoS One, 9(7), e101698 (2014).

The Representation of Representations

This content is being cross-posted to Synthetic Daisies, and is the third in a three-part series on the “science of science”.


This is the final in a series of posts on the science of science and analysis. In past posts, we have covered theory and analysis. However, there is a third component of scientific inquiry: representation. So this post is about the representation of representations, and how representations shape science in more ways than the casual observer might believe.

The three-pronged model of science (theory, experiment, simulation). Image is adapted from Fermi Lab Today Newsletter, April 27 (2012).

For the uninitiated, science is mostly analysis and data collection, with theory being a supplement at best and a necessary evil at worst. Ideally, modern science rests on three pillars: experiment, theory, and simulation. For these same uninitiated, the representation of scientific problems is a mystery. But in fact, it has been the most important motivation behind many of the scientific results we celebrate today. Interestingly, the field of computer science relies heavily on representation, but this concern generally does not carry over into the empirical sciences.

Ideagram (i.e. representation) of complex problem solving. Embedded are a series of hypotheses and the processes that link them together. COURTESY: Diagram from [1].


Problem Representation

So exactly what is scientific problem representation? In short, it is the basis for designing experiments and conceiving of models. It is the sieve through which scientific inquiry flows, restricting the typical “question to be asked” to the most plausible or fruitful avenues. It is often the basis of consensus and assumptions. On the other hand, representation is quite a bit more subjective than people typically would like their scientific inquiry to be. Yet this subjectivity need not lead to an endless debate about the validity of one point of view versus another. There are heuristics one can use to ensure that problems are represented in a consistent and non-leading way.

3-D Chess: a high-dimensional representation of warfare and strategy.

Models that Converge

Convergent models speak to something I alluded to in “The Structure and Theory of Theories” when I discussed the theoretical landscape of different academic fields. The first way is to ask whether or not allied sciences or models point in the same direction. To do this, I will use a semi-hypothetical example. The hypothetical case is to consider three models (A, B, and C) of the same phenomenon. Each of these models makes different assumptions and includes different factors, but they should at least be consistent with one another. One real-world example of this is the use of gene trees and species trees (both phylogenies) to understand evolution in a lineage [2]. In this case, each model covers the same taxa (the same evolutionary scenario) but includes incongruent data. While there are a host of empirical reasons why these two models can exhibit incongruence [3], models that are as representationally complete as possible might resolve these issues.

Orientation of Causality

The second way is to ensure that one’s representation gets the source of causality right. For problems that are not well-posed or are poorly characterized, this can be an issue. Let’s take Type III errors [4] as an example. In hypothesis testing, Type III errors involve using the wrong explanation for a significant result. In layman’s terms, this is getting the right answer for the wrong reasons. Even more than in the case of Type I and Type II errors, focusing on the correct problem representation plays a critical role in resolving potential Type III errors.

Yet problem representation does not always help resolve these types of errors. Skeptical interpretation of the data can also be useful [5]. To demonstrate this, let us turn to the over-hyped area of epigenetics and its larger place in evolutionary theory. Clearly, epigenetics plays some role in the evolution of life, but it is not deeply established in terms of models and theory. Because of this representational ambiguity, some interpretations play a trick: scarcely-understood high-level phenomena such as epigenetics are allowed to usurp the role of related phenomena such as genetic diversity and population processes. When the thing in your representation is not well-defined yet quite popular (e.g. epigenetics), it can take on a causal life of its own. Posing the problem in this way allows one to obscure known dependencies between genes, genetic regulation, and the environment without proving exceptions to these established relationships.

Popularity is Not Sufficiency

The third way is to understand that popular conceptions do not translate into representational sufficiency. In logical deduction, it is often pointed out that necessity does not equal sufficiency. But as with the epigenetics example, it also holds that popularity cannot make something sufficient in and of itself. In my opinion, this is one of the problems with using narrative structures in the communication of science: sometimes an appealing narrative does more to obscure scientific findings than to make them accessible to lay people.

This can be illustrated by looking at media coverage of any big news story. The CNN plane coverage [6] shows it quite clearly: airing rampant speculation and conspiracy theory was a way to feed an increasingly popular story. In such cases, speculation is the order of the day, while thoughtful analysis gets pushed aside. But is this simply a sin of the uninitiated, or can we see parallels in science? Most certainly, there is a problem with recognizing the difference between “popular” science and worthwhile science [7]. There is also precedent in the way certain studies or areas of study are hyped. Some in the scientific community [8] have argued that Nature’s hype of the ENCODE project [9] results fell into this category.

One example of a mesofact: ratings for the TV show The Simpsons over the course of several hundred episodes. COURTESY: Statistical analysis in [10].

Mesofacts

Related to these points is the explicit relationship between data and problem representation. In some ways, this brings us back to a computational view of science, where data do not make sense unless they are viewed in the context of a data structure. But sometimes the factual content of data varies over time in a way that obscures our mental models, and in turn obscures problem representation.

To make this explicit, Sam Arbesman has coined the term “mesofact” [11]. A mesofact is a piece of knowledge that changes slowly over time as new data accumulate. The populations of specific places (e.g. Minneapolis, Bolivia, Africa) have changed in both absolute and relative terms over the past 50 years. But when problems and experimental designs are formulated assuming that facts derived from these data (e.g. the rank of cities by population) do not change over time, we can get the analysis fundamentally wrong.

This may seem like a trivial example. However, mesofacts have relevance to a host of problems in science, from experimental replication to inferring the proper order of causation. The problem comes down to an interaction between data’s natural variance (variables) and the constructs used to represent our variables (facts). When the data exhibit variance against an unchanging mean, it is much easier to use this variable as a stand-in for facts. But when this is not true, scientifically-rigorous facts are much harder to come by. Instead of getting into an endless discussion about the nature of facts, we can instead look to how facts and problem representation might help us tease out the more metaphysical aspects of experimentation.

Applying Problem Representation to Experimental Manipulation

When we do experiments, how do we know what our experimental manipulations really mean? The question itself seems self-evident, but perhaps it is worth exploring. Suppose that you wanted to explore the causes of mental illness, but did not have the benefits of modern brain science as a guide. In defining mental illness itself, you might work from a behavioral diagnosis. But the mechanisms would still be a mystery. Is it a supernatural mechanism (e.g. demons) [12], an ultimate form of causation (reductionism), or a global but hard-to-see mechanism (e.g. quantum something) [13]? An experiment done the same way but assuming three different architectures could conceivably yield statistical significance for all of them.

In this case, a critical assessment of problem representation might be able to resolve this ambiguity. This is something that computational scientists, as modelers and approximators, deal with all of the time. Yet it is also an implicit (and perhaps even more fundamental) component of experimental science. For most of the scientific method’s history, we have gotten around this fundamental concern by relying on reductionism. But in doing so, we restrict ourselves to highly-focused science that does not appeal to the big picture. In a sense, we are blinded by science by doing science.

Focusing on problem representation allows us a way out of this. Not only does it allow us to break free from the straitjacket of reductionism, but it also allows us to address the problem of experimental replication more directly. As has been discussed in many other venues [14], the inability to replicate experiments has plagued both psychological and medical research. But it is in these areas that representation is most important, primarily because it is hard to get right. Even in cases where the causal mechanism is known, the underlying components and the amount of variance they explain can vary substantially from experiment to experiment.

Theoretical Shorthand as Representation

Problem representation also allows us to make theoretical statements using mathematical shorthand. In this case, we face the same problem as the empiricist: are we focusing on the right variables? More to the point, are these variables fundamental or superficial? To flesh this out, I will discuss two examples of theoretical shorthand, and whether or not they might be concentrating on the deepest (and most generalizable) constructs possible.

The first example comes from Hamilton’s rule, derived by the behavioral ecologist W.D. Hamilton [15]. Hamilton’s rule describes altruistic behavior in terms of kin selection. The rule is a simple linear inequality that assumes adaptive outcomes will be optimal ones. In terms of a representation, these properties provide a sort of elegance that makes it very popular.
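
In its standard textbook form, the rule states that altruistic behavior toward a relative is favored whenever

rB > C

where r is the genetic relatedness of the actor to the recipient, B is the reproductive benefit to the recipient, and C is the reproductive cost to the actor.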

In this short representation, an individual’s relatedness to a conspecific weighs more heavily in the decision to help that individual than a simple trade-off between costs and benefits: helping is favored when the relatedness-weighted benefit to the recipient outweighs the cost to the actor. Thus, a closely-related conspecific (e.g. a brother) will invest more into a social relationship with kin than with non-kin, and will generally take more personal risks in doing so. While more math is used to support the logic of this statement [15], the inequality is often treated as a widely applicable theoretical statement. However, some observers [16] have found the parsimony of this representation to be both incomplete and intellectually unsatisfying. And indeed, sometimes an over-simplistic model does not deal with exceptions well.

The second example comes from Thomas Piketty’s work. Piketty, economist and author of “Capital in the Twenty-First Century” [17], has proposed what he calls a “first fundamental law” of capitalism, and is best known for a simple inequality relating income inequality to economic growth. The formulation characterizes the relationship between economic growth, inherited wealth, and income inequality within a society.
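
For reference, the much-quoted relationship is usually written as

r > g

where r is the average annual rate of return on capital and g is the rate of economic growth; the “first fundamental law” proper is the accounting identity α = r × β, with α the capital share of national income and β the capital-to-income ratio.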

In this equally short representation, income inequality is driven by the relative dominance of two factors: the return on (largely inherited) wealth and economic growth. When growth is very low and even a nominal amount of inherited wealth exists, inequality persists and dampens economic mobility. In Piketty’s book, other equations and a good amount of empirical investigation are used to support this statement. Yet, despite its simplicity, it has held up (so far) to the scrutiny of peer review [18]. In this case, representation through variables that generalize broadly but do not handle exceptional behavior well produces a highly-predictive model. On the other hand, this form of representation also makes it hard to distinguish between a highly unequal post-industrial society and a feudal, agrarian one.

Final Thoughts

I hope to have shown you that representation is an underappreciated component of doing and understanding science. While the scientific method is our best strategy for discovering new knowledge about the natural world, it is not without its burden of conceptual complexity. In the theory of theories, we learned that formal theories are based on deep reasoning and are (by necessity) often incomplete. In the analysis of analyses, we learned that the data are not absolute: much reflection and analytical care is needed to ensure that an analysis represents meaningful facets of reality. And in this post, these loose ends were tied together in the form of problem representation. While an underappreciated aspect of practicing science, representing problems in the right way is essential for separating science from pseudoscience, reality from myth, and proper inference from hopeful inference.

NOTES:

[1] Eldrett, G.   The art of complex problem-solving. MediaExplored blog, July 10 (2010).


[2] Nichols, R.   Gene trees and species trees are not the same. Trends in Ecology and Evolution, 16(7), 358-364 (2001).

[3] Gene trees and species trees can be incongruent for many reasons. Nature Knowledge Project (2012).

[4] Schwartz, S. and Carpenter, K.M.   The right answer for the wrong question: consequences of type III error for public health research. American Journal of Public Health, 89(8), 1175–1180 (1999).

[5] It is important here to distinguish between careful skepticism and contrarian skepticism. In addition, skeptical analysis is not always compatible with the scientific method.

For more, please see: Myers, P.Z.   The difference between skeptical thinking and scientific thinking. Pharyngula blog, June 18 (2014) AND Hugin   The difference between “skepticism” and “critical thinking”? RationalSkepticism.org, May 19 (2010).

[6] Abbruzzese, J.   Why CNN is obsessed with Flight 370: “The Audience has Spoken”. Mashable, May 9 (2014).

[7] Biba, E.   Why the government should fund unpopular science. Popular Science, October 4 (2013).

[8] Here are just a few examples of the pushback against the ENCODE hype:

a) Mount, S.   ENCODE: Data, Junk and Hype. On Genetics blog, September 8 (2012).

b) Boyle, R.   The Drama Over Project Encode, And Why Big Science And Small Science Are Different. Popular Science, February 25 (2013).

c) Moran, L.A.   How does Nature deal with the ENCODE publicity hype that it created? Sandwalk blog, May 9 (2014).

[9] For an example of the nature of this hype, please see: The Story of You: ENCODE and the human genome. Nature Video, YouTube, September 10 (2012).

[10] Fernihough, A.   Kalkalash! Pinpointing the Moments “The Simpsons” became less Cromulent. DiffusePrior blog, April 30 (2013).

[11] Arbesman, S.   Warning: your reality is out of date. Boston Globe, February 28 (2010). Also see the following website: https://www.mesofacts.org/


[12] Surprisingly, this is a contemporary phenomenon: Irmak, M.K.   Schizophrenia or Possession? Journal of Religion and Health, 53, 773-777 (2014). For a thorough critique, please see: Coyne, J.   Academic journal suggests that schizophrenia may be caused by demons. Why Evolution is True blog, June 10 (2014).

[13] This is an approach favored by Deepak Chopra. He borrows the rather obscure idea of “nonlocality” (yes, basically a wormhole in spacetime) to explain higher levels of conscious awareness with states of brain activity.

[14] Three (divergent) takes on this:

a) Unreliable Research: trouble at the lab. Economist, October 19 (2013).

b) Ioannidis, J.P.A.   Why Most Published Research Findings Are False. PLoS Med 2(8): e124 (2005).

c) Alicea, B.   The Inefficiency (and Information Content) of Scientific Discovery. Synthetic Daisies blog, November 19 (2013).

[15] Hamilton, W. D.   The Genetical Evolution of Social Behavior. Journal of Theoretical Biology, 7(1), 1–16 (1964). See also: Brembs, B.   Hamilton’s Theory. Encyclopedia of Genetics.

[16] Goodnight, C.   Why I Don’t like Kin Selection. Evolution in Structured Populations blog, April 23 (2014).

[17] Piketty, T.   Capital in the 21st Century. Belknap Press (2014). See also: Galbraith, J.K.   Unpacking the First Fundamental Law. Economist’s View blog, May 25 (2014).

[18] DeLong, B.   Trying, yet again, to communicate the arithmetic scaffolding of Piketty’s “capital in the Twenty-First Century”. Washington Center for Equitable Growth blog, June 5 (2014).

The Analysis of Analyses

This material is cross-posted to Synthetic Daisies. This is part of a continuing series on the science of science (or meta-science, if you prefer). The last post was about the structure and theory of theories.


In this post, I will discuss the role of data analysis and interpretation. Why do we need data, as opposed to simply observing the world or making up stories? The simple answer: data give us a systematic accounting of the world in general and of experimental manipulations in particular. Unlike an apparition on a piece of toast, they provide a record of the natural world that is independent of our sensory and conceptual biases. But as we saw in the theory of theories post, and as we will see in this post, it takes a lot of hard work and thoughtfulness. What we end up with is an analysis of analyses.

Data take many forms, so approach analysis with caution. COURTESY: [1].

Introduction

What exactly are data, anyway? We hear a lot about them, but rarely stop to consider why they are so potentially powerful. Data are both an abstraction of and an incomplete sampling (approximation) of the real world. While data are not absolute (e.g. you can always have more data or sample the world more completely), they provide a means of generalization that is partially free from stereotyping. And as we can see in the cartoon above, not all data that influence our hypothesis can even be measured. Some are beyond the scope of our current focus and technology (e.g. hidden variables), while some consist of interactions between variables.

In the context of the theory of theories, data have the same advantage over anecdote that deep, informed theories have over naive theories. In the context of the analysis of analyses, data do not speak for themselves. To conduct a successful analysis of analyses, it is important to be both interpretive and objective. Finding the optimal balance between the two gives us an opportunity to reason more clearly and completely. If this causes some people to lose their view of data as infallible, then so be it. Sometimes the data fail us, and other times we fail ourselves.

When it comes to interpreting data, the social psychologist Jon Haidt suggests that “we think we are scientists, but we are actually lawyers” [2]. But I would argue this is where the difference between untrained eyes sharing infographics and the truly informed acts of analysis and data interpretation becomes important. The latter is an example of a meta-meta-analysis, or a true analysis of analyses.

The implications of Infographics are clear (or are they?) COURTESY: Heatmap, xkcd.

NHST: the incomplete analysis?

I will begin our discussion with a current hot topic in the field of analysis. It involves interpreting statistical “significance” using an approach called Null Hypothesis Significance Testing (or NHST). If you have ever done a t-test or an ANOVA, you have used this approach. The current discussion about the scientific replication crisis is tied to the use (and perhaps overuse) of these types of tests. The basic criticism involves the inability of NHST statistics to handle multiple tests properly and to deal properly with experimental replication.

Example of the NHST and its implications. COURTESY: UC Davis StatWiki.

This has even led scientists such as John Ioannidis to demonstrate why “most published research findings are false”. But perhaps this is just to make a rhetorical point. The truth is, our data are inherently noisy. Too many assumptions and biases go into collecting most datasets, all for data with too little known structure. Not only are our data noisy, but in some cases they may also possess hidden structure which violates the core assumptions of many statistical tests [3]. Some people have rashly (and boldly) proposed that this points to flaws in the entire scientific enterprise. But, like most such claims, this does not take into account the nature of the empirical enterprise and the reification of the word significance.

A bimodal (i.e. non-normal) distribution, being admonished by its unimodal brethren. Just one case in which the NHST might fail us.

The main problem with the NHST is that it relies upon distinguishing signal from noise [4], but not always in the broader context of effect size or statistical power. In a Nature News article [5], Regina Nuzzo discusses the shortcomings of the NHST approach and of tests of statistical significance (e.g. p-values). Historical context for the so-called frequentist approach [6] is provided, and its connection to assessing the validity of experimental replications is discussed. One possible remedy is to report effect sizes and statistical power alongside p-values; another is the use of Bayesian techniques [7]. The Bayesian approach allows one to use a prior distribution (or historical conditioning) to better assess the meaningfulness of a statistically significant result. But the construction of priors relies on the existence of reliable data. If those data do not exist for some reason, we are back to square one.
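
As a small illustration of why a lone p-value can mislead, the following Python sketch simulates repeated runs of the same small experiment; the effect size and sample size are assumed values chosen for illustration, not taken from any study discussed here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_effect = 0.4      # assumed standardized effect size (Cohen's d), purely illustrative
n_per_group = 20       # a typical small-sample experiment
n_replications = 10

p_values = []
for _ in range(n_replications):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)   # the standard NHST workhorse
    p_values.append(round(float(p), 3))

# The p-values vary widely from run to run even though the underlying effect never
# changes, which is why a single "significant" (or non-significant) result says
# little about replicability.
print(p_values)
```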

Big Data and its Discontents

Another challenge to conventional analysis involves the rise of so-called big data. Big data refers to the collection and analysis of very large datasets, which come from sources such as high-throughput biology experiments, computational social science, open-data repositories, and sensor networks. Considering their size, big data analyses should allow for good statistical power and the ability to distinguish signal from noise. Yet due to their structure, we are often required to rely upon correlative analyses. While correlation is equated with relational information, it does not (and never has) equate to causation [8]. Innovations in machine learning and other data-modeling techniques can sometimes overcome this limitation, but correlative analyses are still the easiest way to deal with these data.

IBM’s Watson: powered by large databases and correlative inference. Sometimes this cognitive heuristic works well, sometimes not so much.

Given a large enough collection of variables with a large number of observations, correlations can lead to accurate generalizations about the world [9]. The large number of variables is needed to extract relationships, while the large number of observations is needed to understand the true variance. This can become a problem where subtle, higher-order relationships (e.g. feedbacks, time-dependent saturations) exist, or when the variance is not uniform with respect to the mean (e.g. bimodal distributions).
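
A toy Python example of the multiple-comparisons danger lurking in wide datasets: with enough purely random predictors, something will correlate with the outcome by chance alone. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs = 100        # observations
n_vars = 5000      # candidate predictors, all of them pure noise

outcome = rng.normal(size=n_obs)
predictors = rng.normal(size=(n_vars, n_obs))

# Pearson correlation of each noise predictor with the outcome
correlations = np.array([np.corrcoef(x, outcome)[0, 1] for x in predictors])

best = np.argmax(np.abs(correlations))
print(f"Strongest 'predictor' out of {n_vars}: r = {correlations[best]:.2f}")
# Every predictor is noise, yet the best |r| can look "meaningful"
# if it is the only correlation that gets reported.
```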

Complex Analyses

Sometimes large datasets require more complicated methods to find relevant and interesting features. These features can be thought of as solutions. How do we use complex analysis to find them? In the world of the analysis of analyses, large datasets can be mapped to solution spaces with a defined shape. This strategy uses convergence/triangulation as a guiding principle, but does so through the rules of metric geometry and computational complexity. A related and emerging approach called topological data analysis [10] can be used to conduct rigorous relational analyses. Topological data analysis takes datasets and maps them to a geometric object (i.e. a topology) such as a tree or a surface.
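
To give a flavor of the idea without invoking persistent homology proper, here is a minimal Python sketch of its zero-dimensional version: build a neighborhood graph at a chosen scale and count the connected pieces of the data. The point cloud and the scale parameter are invented for illustration.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(1)

# Synthetic point cloud: two well-separated noisy clusters.
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
cluster_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(50, 2))
points = np.vstack([cluster_a, cluster_b])

# Connect any two points closer than a chosen scale epsilon; the resulting graph
# is a crude stand-in for the "shape" of the data at that scale.
epsilon = 0.8
dist = distance_matrix(points, points)
adjacency = csr_matrix(((dist < epsilon) & (dist > 0)).astype(int))

n_components, _ = connected_components(adjacency, directed=False)
# With well-separated clusters, the count at this scale typically recovers the blobs.
print(f"Connected pieces of the data at scale {epsilon}: {n_components}")
```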

A portrait of convexity (quadratic function). A gently sloping dataset, a gently sloping hypothesis space. And nothing could be further from the truth……

In topological data analyses, the solution space encloses all possible answers on a surface, while the surface itself has a shape that represents how easy it is to move from one portion of the solution space to another. One common assumption is that this solution space is known and finite, and that its shape is convex (e.g. a gentle curve). If that were always true, then analysis would be easy: we could use a moderately large dataset to get the gist of the patterns in the data, and any additional scientific inquiry would simply fill in the gaps. And indeed, sometimes it works out this way.

One example of a topological data analysis of most likely Basketball positions (includes both existing and possible positions). COURTESY: Ayasdi Analytics and [10].

The Big Data Backlash… Enter Meta-Analysis

Despite its successes, there is nevertheless a big data backlash. Ernest Davis and Gary Marcus [11] present us with nine reasons why big data are problematic. Some of these have been covered in the last section, while others suggest that there can be too much data. This is an interesting position, since it is common wisdom that more data always give you more resolution and insight. In reality, insight and information can be obscured by noisy or irrelevant data, and even the most informative of datasets can yield misinformed analyses if the analyst is not thoughtful.

Of course, ever-bigger datasets by themselves do not give us the insights necessary to determine whether or not a generalized relationship is significant. The ultimate goal of data analysis should be to gain deep insights into whatever the data represent. While this does involve a degree of interpretive subjectivity, it also requires an intimate dialogue between analysis, theory, and simulation. Perhaps the latter is much more important, particularly in cases where the data are politically or socially sensitive. These considerations are missing from much contemporary big data analysis [12]. This vision goes beyond the conventional “statistical test on a single experiment” kind of experimental investigation, and leads us to meta-analysis.

The basic premise of a meta-analysis is to use a strategy of convergence/triangulation to converge upon a result using a series of studies. The logic here involves using the power of consensus and statistical power to arrive at a solution. The problem is represented as a series of experiments, each with its own effect size. For example, if I believe that eating oranges causes cancer, how should I arrive at a sound conclusion: one study with a very large effect size, or many studies with various effect sizes and experimental contexts? According to the meta-analysis view, the latter should be most informative. In the case of potential factors in myocardial infarction [13], significant results that all point in the same direction (with minimal variability in effect size) lend the strongest support to a given hypothesis.
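
As a minimal sketch of the arithmetic behind the simplest (fixed-effect, inverse-variance) form of meta-analytic pooling, the per-study effect sizes and variances below are invented purely for illustration.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g. log odds ratios) and their variances.
effects = np.array([0.30, 0.45, 0.25, 0.38, 0.50])
variances = np.array([0.04, 0.09, 0.02, 0.05, 0.12])

weights = 1.0 / variances                            # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights) # precision-weighted average
pooled_se = np.sqrt(1.0 / np.sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```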

Example of a meta-analysis. COURTESY: [13].

The Problem with Deep Analysis

We can go even further down the rabbit hole of analysis, for better or for worse. However, this often leads to problems of interpretation, as deep analyses are essentially layered abstractions. In other words, they are higher-level abstractions dependent upon lower-level abstractions. This leads us to a representation of representations, which will be covered in an upcoming post. Here, I will propose and briefly explore two phenomena: significant pattern extraction and significant reconstructive mimesis.

One form of deep analysis involves significant pattern extraction. While the academic field of pattern recognition has made great strides [14], sometimes the collection of data (which involves pre-processing and personal bias) is flawed. Other times, it is the subjective interpretation of these data that is flawed. In either case, the result is the extraction of patterns that make no sense, which are then assigned significance. Worse yet, some of these patterns are also thought to be of great symbolic significance [15]. The Bible Code is one example of such pseudo-analysis. Patterns (in this case secret codes) are extracted from a database (a book), and these data are then probed for novel but coincidental pattern formation (e.g. codes formed by the first letter of every line of text). As this is usually interpreted as decryption (or deconvolution) of an intentionally placed message, significant pattern extraction is related to the deep, naive theories discussed in “The Structure and Theory of Theories”.
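
A toy Python demonstration of how easily such “hidden” patterns arise by chance: in a long string of random letters, short words can almost always be found as equidistant letter sequences, even though nothing was encoded. The word searched for and the skip limit are arbitrary choices.

```python
import random
import string

random.seed(7)

# A "text" of 100,000 random letters; by construction it contains no message.
text = "".join(random.choices(string.ascii_lowercase, k=100_000))

def find_skip_sequence(text, word, max_skip=50):
    """Return (start, skip) if `word` occurs as an equidistant letter sequence."""
    for skip in range(1, max_skip + 1):
        for start in range(len(text) - skip * (len(word) - 1)):
            if all(text[start + i * skip] == ch for i, ch in enumerate(word)):
                return start, skip
    return None

# Short "hidden" words are routinely found in pure noise.
print(find_skip_sequence(text, "fate"))
```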

Congratulations! Your pattern recognition algorithm came up with a match. Although if it were a computer instead of a mind, it might do a more systematic job of rejecting it as a false positive. LESSON: the confirmatory criteria for a significant result need to be rigorous.

But suppose that our conclusions are not guided by unconscious personal biases or ignorance. We might intentionally leverage biases in the service of parsimony (or making things simpler). Sometimes, the shortcuts we take in representing natural processes make it difficult to understand what is really going on. This is the problem of significant reconstructive mimesis. In the case of molecular animations, this has been pointed out by Carl Zimmer [16] and PZ Myers [17]. In most molecular animations, processes occur smoothly (without error) and within full view of the human observer. Contrast this with the inherent noisiness and spatially-crowded environment of the cell, a depiction of which would be highly realistic but not very understandable. In such cases, we construct a model which consists of data, but that model is selective and the data are deliberately sparse (in this case, smoothed). This is an example of a representation (the model) that informs an additional representation (the data). For purposes of simplicity, the model and data are compressed in a way that preserves signal and removes noise. In the case of a digital image file (e.g. .jpg, .gif), such schemes work pretty well. But in other cases, the data are not well-known, and significant distortions are actually intentional. This is where big challenges arise in getting things right.

A multi-layered abstraction from a highly-complex multivariate dataset? Perhaps. COURTESY: Salvador Dali, Three Sphinxes of Bikini.

Conclusions

Data analysis is hard. But in the world of everyday science, we often forget how complex and difficult this endeavor is. Modern software packages have made the basic and well-established analysis techniques deceptively simple to employ. In moving to big data and multivariate datasets, however, we begin to face head-on the challenges of analysis. In some cases, highly effective techniques have simply not been developed yet. This will require creativity and empirical investigation, things we do not often associate with statistical analysis. It will also require a role for theory, and perhaps even the theory of theories.

As we can see from our last few examples, advanced data analysis can require conceptual modeling (or representations). And sometimes, we need to map between domains (from models to other, higher-order models) to make sense of a dataset. This, the most complex of analyses, can be considered a representation of representations. Whether a particular representation of a representation is useful or not depends upon how much noiseless information can be extracted from the available data. Particularly robust high-level models can take very little data and provide us with a very reliable result. But this is an ideal situation, and often even the best models, presented with large amounts of data, can fail to give a reasonable answer. Representations of representations also provide us with the opportunity to imbue an analysis with deep meaning. In a subsequent post, I will flesh this out in more detail. For now, I leave you with this quote:

“An unsophisticated forecaster uses statistics as a drunken man uses lampposts — for support rather than for illumination.” Andrew Lang.

NOTES:

[1] Learn Statistics with Comic Books. CTRL Lab Notebook, April 14 (2011).

[2] Mooney, C.   The Science of Why We Don’t Believe Science. Mother Jones, May/June (2011).

[3] Kosko, B.   Statistical Independence: What Scientific Idea Is Ready For Retirement. Edge Annual Question (2014).

[4] In order to separate signal from noise, we must first define noise. Noise is consistent with processes that occur at random, such as the null hypothesis or a coin flip. Using this framework, a significant result (or signal) is a result that deviates from random chance to some degree. For example, a p-value of 0.05 means that, if only chance were operating, a result at least as extreme as the one observed would be expected in just 5% of replications. This is, of course, an incomplete account of the relationship between signal and noise. Models such as Signal Detection Theory (SDT) or data smoothing techniques can also be used to improve the signal-to-noise ratio.

[5] Nuzzo, R.   Scientific Method: Statistical Errors. Nature News and Comment, February 12 (2014).

[6] Fox, J.   Frequentist vs. Bayesian Statistics: resources to help you choose. Oikos blog, October 11 (2011).

[7] Gelman, A.   So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing. Statistical Modeling, Causal Inference, and Social Science blog, April 2 (2011).

[8] For some concrete (and satirical) examples of how correlation does not equal causation, please see Tyler Vigen’s Spurious Correlations blog.

[9] Voytek, B.   Big Data: what’s it good for? Oscillatory Thoughts blog, January 30 (2014).

[10] Beckham, J.   Analytics Reveal 13 New Basketball Positions. Wired, April 30 (2012).

[11] Davis, E. and Marcus, G.   Eight (No, Nine!) Problems with Big Data. NYTimes Opinion, April 6 (2014).

[12] Leek, J.   Why big data is in trouble – they forgot applied statistics. Simply Statistics blog, May 7 (2014).

[13] Egger, M.   Bias in meta-analysis detected by a simple, graphical test. BMJ, 315 (1997).

[14] Jain, A.K., Duin, R.P.W., and Mao, J.   Statistical Pattern Recognition: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1), 4-36 (2000).

It is interesting to note that the practice of statistical pattern recognition (training a statistical model with data to evaluate additional instances of data) has developed techniques and theories related to rigorously rejecting false positives and other spurious results.

[15] McCardle, G.     Pareidolia, or Why is Jesus on my Toast? Skeptoid blog, June 6 (2011).

[16] Zimmer, C.     Watch Proteins Do the Jitterbug. NYTimes, April 10 (2014).

[17] Myers, P.Z.   Molecular Machines! Pharyngula blog, September 3 (2006).

The Structure and Theory of Theories

This content is being cross-posted to Synthetic Daisies. This post represents a first-pass approximation (and is perhaps a confounded, naive theory in itself). Hope you find it educational at the very least. Also, check out Carnival of Evolution #70 (the game of evolution), now live at Synthetic Daisies. Carnival of Evolution is a monthly blog carnival that I have hosted once a year for the last three years. Lots of interesting science-related readings there.

Are all theories equal? In an age where creationism is making its way into the school curriculum (under the guise of intelligent design) and forms of denialism and conspiracy theory are becoming mainstream, this is an important question. While classic philosophy of science and logical positivist approaches simply assume that the best theories evolve through the scientific process, living in an era of postmodernism, multiculturalism, and the democratization of information demands that we think about this in a new way.

Sense-making as Layers of Information

By taking cues from theoretical artificial intelligence and contemporary examples, we can revise the theory of theories. Indeed, we live in interesting times. But what is a theory —  and why do people like to say it’s “just a theory” when they disagree with the prevailing model? One popular view of theory is that of “sense-making” [1]: that is, theories allow us to synthesize empirical observations into a mental model that allows us to generalize without becoming overwhelmed by complexity or starting from scratch every time we need to make a predictive statement.

 

The process of making sense of the world by building theories. Keep this in mind as we discuss the differences between naive and informed theories. COURTESY: Figure 2 in [1b].

Yet sense-making is not the whole story, particularly when theories compete for acceptance [2]. Are all theories equal, or are some theories more rigorous than others? This question is in much the same vein as the critique of “absolute facts” in postmodern theory. To make sense of this, I propose that there are actually two kinds of theory: naive theories and informed theories. Naive theories rely on common sense, and can often do very well as heuristic guides to the world. However, they tend to fall apart when presented with counter-intuitive phenomena. This is where informed theory becomes important. Informed theories are not synonymous with scientific theories — in fact, some ancient beliefs and folk theories can fall into this category alongside formal scientific theories. We will see the reasons for this nominal equivalence (and for the non-equivalence of more naive theories) as we go through the next few paragraphs.

Naive and informed theories can be distinguished by their degree of “common sense”. Normally, common sense is a value judgement. In this case, however, common sense involves a lack of information. Naive theories tend to be intuitive rather than counterintuitive. Naive theories are constructed only from immediate observations and abductive reasoning between these observations. Naive theoretical synthesis can be thought of as a series of “if-and-then” statements. For example, if A and B are observed, and they can be linked through co-occurrence or some other criterion, then the link between them is judged to be plausible.

The role of abductive theories in organizations. COURTESY: Free Management Library.

Informed theories, on the other hand, utilize deduction and can be divided into working theories (e.g. heuristics) and deep theories that explain, predict, and control. Working theories tend to utilize inductive logic, whereas deep theories tend to rely upon deductive logic. Since deep theories are deductive, they tend to be multi-layered constructs with mechanisms and premises based on implicit assumptions [3]. As a deductive construct, a deep informed theory can lead to inference. Inference gives us a powerful way to predict outcomes that are not so intuitive. For example, phylogenetic theory allows us to infer common ancestors of extant species that may look nothing like an “average” of (or a “cross” between) these descendants.

A contingency table showing the types and examples of naive and informed theories.

 

 

 

            NAIVE                                            INFORMED

SHALLOW     Cults, philosophies based on simple principles   Pop-psychology and pop-science

DEEP        Conspiracy theories                              Scientific theories

Naive and informed theories can also be distinguished by their degree of complexity. As they are based on uninformed intuition, naive theories are self-evident and self-complete, perhaps too much so. Fundamentalist religious belief and denialist-based political philosophies are based on simple sets of principles and are said by some to be tightly self-referential [4]. This inflexible self-referential quality means that these theories rely on common sense over social complexity. Conspiracy theories and denialist tendencies are deeper versions of naive theories [5], but unlike their informed counterparts, they are not grounded in objective data and are particularly resistant to updating [6]. By contrast, formal theories are based on abstractions and possess incompleteness-tolerance. This is often by necessity, as we cannot observe every instance of every associated process we would like to understand.

Sometimes the deepest naive theories lead to conspiracies. I have it on the highest authority.

Theory of Ontological Theories?

This leads us to an interesting set of questions. First, are the informed theories that currently exist in many fields of inquiry inevitable outcomes? Second, why are some fields more theoretical than others, and why are theory and data more integrated in some fields but not others? Third, is the state of theory in different areas of science due to historical context, or is it a consequence of the natural laws they purport to make sense of? This last question is one of historical contingency vs. field-specific structure. To answer these three questions, we will now briefly examine five examples from various academic disciplines. Underlying many of these approaches to informed theory is an assumption: theories are a search for ontological truths rather than the product of interactions among privileged experts. This is where informed theories hold an advantage — they can change gradually with regard to new data and hypotheses while also remaining relevant. This is an ideal in any case, so let us get to the examples:

1) Economics has an interesting relationship to theory. Formal macroeconomic theory involves two schools of thought: freshwater and saltwater. The former group favors free-market theories, while the latter group adheres to Keynesian principles. However, there are also adherents of political economy, who favor models of performativity over formal mathematical models. Since the financial crisis of 2008, there has been a rise of interest in alternative economic theories and associated models, perhaps serving as an example of how theories change and are supplanted over time. And, of course, a common naive theory of economics is based on confounding micro- (or household) and macro- (or national-scale) economics.

2) Physics is thought of as the gold standard of scientific theory. For example, “Einstein” is synonymous with “theory” and “genius”. The successes of deep, informed theories such as relativity and quantum mechanics are well-known. Aside from explanation and prediction, the hallmarks of physics theory are logical consistency and grand unification, an enterprise that can often be separated from experimentation. As the gold standard of scientific theory, physics also provides a theoretical conduit to other disciplines, sometimes without modification. We will discuss this further in point #5.

 

This book [7] is a statement on self-anointed “bad” theories. The statement is: although string theory is structurally elegant, it is not functionally elegant like quantum gravity. But does that make quantum gravity a superior theory?

3) In neuroscience and cell biology, theories are often deemed superfluous and inherently incomplete in lieu of ever more data. This is partially due to our level of understanding relative to the complex nature of these fields. Yet many naive and informed social theories exist, despite the complexity of the social world. So what is the difference? It could be a matter of neuroscientists and cell biologists not being oriented towards theoretical thinking. This may explain why computational neuroscience and systems biology exist as fields quite independent of their biological counterparts.

4) Theoretical constructs associated with evolution by natural selection are the consensus in evolutionary biology. This wasn’t always the case, however, as 19th century German embryologists and adherents to Lamarckian theory had competing ideas of how animal diversity was produced and perpetuated. However, Darwinian notions of evolution by natural selection did the best job at synthesizing previous knowledge about natural history with a formal mechanism for descent with modification. In popular culture, there has always been resistance to Darwinian evolution. Usually, these divine creation-inspired naive theories are embraced as a contrarian counterbalance to the deep, informed theory advocated by scientific authorities. In this case, theories have a social component, as Social Darwinism (a social co-option of Darwinian evolution) was popular in the 19th and early 20th centuries.

5) Because informed theories can explain invariants of the natural world, they often cross academic disciplines. Sometimes these crossings are direct. Evolutionary Psychology is one such example. Evolutionary theory can explain biological evolution, and as we are products of evolution, the same theory should explain the evolution of the human mind. It is a simple analogical transfer, but one that is much harder pressed to yield the same results. But sometimes theories cross into domains not because of their suitability for the problem at hand, but because they are mathematically rigorous and/or have great predictive power in their original domain. The “quantum mind” is one such example of this. Is “quantum mind” theory any better or more powerful than a naive theory about how the mind works? It is unclear. However, this co-option suggests that even the most reputable informed theories can be cultural artifacts. A real caveat emptor.

Roger Penrose et al. [8] will tell us about everything, in the spirit of physics and mathematics.

Properties of the Theory of Theories

The inherent dualisms of the theory of theories stem from deeper cognitive divisions between matter-of-fact and abstract thinking. As cultural constructs, matter-of-fact theories are much more amenable to the narrative structures that permeate folklore and pseudo-science. This does not mean that abstract theories are “better” or any more “scientific” than matter-of-fact formulations. In fact, abstract theories are more susceptible to cultural blends [9] or symbolic confabulation [10], as these short-cuts aid us in conceptual understanding.

Scientific theories tend to be abstract, informed ones, but the scientific theories best known to the general public have many features of naive theories. Examples of this include Newtonian physics and the Big Bang. There is a certain intuitive satisfaction offered by these two theories that is not offered by, say, quantum theory or Darwinian evolution [11]. This satisfaction arises from consistency with one’s immediate sensory surroundings and/or existing cultural myths. Interestingly, naive (and mythical) versions of quantum theory and Darwinian evolution have arisen alongside the more formal theories. These faux-theories use their informed theory counterparts as a narrative template to explain everything from the spiritual basis of the mind (Chopra’s Nonlocality) to social inequalities (Spencer’s Social Darwinism).

But what about beauty in theory? Again, this could arguably be a feature of naive theorizing. Whether it is the over-application of parsimony or an over-reliance on elegance and beauty [7], informed theories require a degree of initial convolution before such features can be incorporated into the theory. In other words, these things should not be goals in and of themselves. Rather, deep, informed theories should be robust enough to be improved upon incrementally without having to be completely replaced [12]. The beauty of parsimony and symmetry should only be considered a nice side-benefit. There is also a significant role for mental and statistical models in theory-building, but for the sake of relative simplicity I am intentionally leaving this discussion aside for now.

 

Tides go in, tides go out. When it’s God’s will, it’s a short and neat proposition. When it’s more complicated, then it’s scientific inquiry. COURTESY: Geekosystem and High Power Rocketry blogs.

In a future post, I will move from the notion of a theory of theories to the need for an analysis of analyses. Much like the theory of theories, a deep reconsideration of analysis is also needed. This has been driven by the scientific replication crisis, the proliferation of data (e.g. infographics) on the internet, and the rise of big data (e.g. very large datasets, once again enabled by the internet).

NOTES:

[1] Here are a few references on the cognition of sense-making, particularly as it related to theory construction:

a) Klein, G., Moon, B. and Hoffman, R.F.   Making sense of sensemaking I: alternative perspectives. IEEE Intelligent Systems, 21(4), 70–73 (2006).

b) Pirolli, P., & Card, S.   The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. Proceedings of the International Conference on Intelligence Analysis (2005).

[2] Here are some references that will help you understand the “hows” and “whys” of theory competition, with particular relevance to what I am calling deep, informed theories:

a) Steiner, E.   Methodology of Theory-building. Educology research Associates, Sydney (1988).

b) Kuhn, T.   The structure of scientific revolutions. University of Chicago Press (1962).

c) Arbesman, S.   The Half-life of Facts. Current Press (2012).

[3] Sometimes, naive theorists will accuse deep, informed theorists of being “stupid” or “irrelevant”. This is because the theories generated do not conform to the expectations and understandings of the naive theorist.

Paul Krugman calls one such instance “the myth of the progressive economist”: Krugman, P.   Stupidity in Economic Discourse 2. The Conscience of a Liberal blog, April 1 (2014).

[4] Religious fundamentalist and denialist groups also seem to theorize in a deep naive manner, using a tightly self-referential set of theoretical propositions. In these cases, however, common sense is replaced with an intersubjective (e.g. you have to be part of the group to understand) self-evidence. The associated logical extremes tend to astound people not in the “know”.

a) Example from religious fundamentalism: Koerth-Baker, M.   What do Christian fundamentalists have against set theory? BoingBoing, August 7 (2012) AND Simon, S.   Special Report: Taxpayers fund creationism in the classroom. Politico Pro, March 24 (2014).

For a discussion of Nominalism (basic math) vs. Platonism (higher math) in Mathematics, please see: Franklin, J.   The Mathematical World. Aeon Magazine, April 7 (2014).

b) Example from climate change denialism: Cook, J. and Lewandowsky, S.   Recursive Fury: facts and misrepresentations. Skeptical Science blog, March 21 (2013).

[5] For one such example, please see: Roberts, D.   Conservative hostility to science predates climate science. Grist.org, August 12 (2013).

For a more comprehensive background on naive theories (in this case, the development of naive theories of physics among children) please see the following:

a) Reiner, M., Slotta, J.D., Chi, M.T.H., and Resnick, L.B.   Naive Physics Reasoning: a commitment to substance-based conceptions. Cognition and Instruction, 18(1), 1-34 (2000).

b) Vosniadou, S.   On the Nature of Naive Physics. In “Reconsidering Conceptual Change: issues in theory and practice”, M. Limon and L. Mason, eds., Pgs. 61-76, Kluwer Press (2002).

For the continued naive popularity of the extramission theory of vision, please see the following:

c) Winer, G.A., Cottrell, J.E., Gregg, V., Fournier, J.S., and Bica, L.A.   Fundamentally misunderstanding visual perception: adults’ beliefs in visual emissions. American Psychologist, 57, 417-424 (2002).

[6] Sometimes, theories that are denialist in tone are constructed to preserve certain desired outcomes from data that actually suggest otherwise. In other words, a narrative takes precedence over a more objective understanding. Charles Seife calls this a form of “proofiness”.

For more, please see: Seife, C. Proofiness: how you’re being fooled by numbers. Penguin Books (2011).

[7] Smolin, L.   The Trouble with Physics. Houghton-Mifflin (2006).

[8] Penrose, R., Shimony, A., Cartwright., N., and Hawking, S.   The large, the small, and the human mind. Cambridge University Press (1997).

[9] Fauconnier, G.   Methods and Generalizations. In “Cognitive Linguistics: foundations, scope, and methodology“. T. Janssen and G. Redeker, eds, 95-128. Mouton DeGruyter (1999).

[10] A confound, in this sense, occurs when ideas from deep, informed theories are confused or otherwise condensed for purposes of superficial understanding or misinterpretation. In the case of creationists, such intentional confounds are often used to generate doubt and confusion about subtle and complex concepts.

a) Role of confabulation in cognition (a theory): Hecht-Nielsen, R.   Confabulation Theory. Scholarpedia, 2(3), 1763 (2007).

b) Example of intentional confounding from anti-evolutionism: Moran, L.A.   A creationist tries to understand genetic load. Sandwalk blog, April 1 (2014).

[11] By “conforming to intuitive satisfaction”, I mean that Newtonian physics explains the physics of things we interact with on an everyday basis, and the Big Bang is consistent with the idea of divine creation (or creation from a singular point). This is not to say that these theories were developed because of these features, but perhaps explains their widespread popular appeal.

[12] Wholesale replacement of old deep, informed theories is explained in detail here: Kuhn, T.   Structure of Scientific Revolutions. University of Chicago Press (1962).

Logical Fallacy vs. Logical Fallacy

This content is cross-posted to Synthetic Daisies. To get the most out of this post, please review the following materials:

Alicea, B.   Informed Intuition > Pure Logic, Reason + No Information = Fallacy? Synthetic Daisies blog, January 4 (2014).

The peer-review committee for pure rationality. For more, please see [1].


A while back, I posted some critiques of and modifications to the conventional approach to logical fallacies [1] on the Synthetic Daisies blog. It seems as though every debate of the issues on the internet involves an accusation that one side is engaging in some sort of “fallacy”. This is especially true of topics of broader societal relevance, where the notion of logical fallacies has become entangled with denialism [2] and epistemic closure [3].

Social Media argumentation, one person’s take.

To recap (full version of the post here), I proposed that we remove six fallacies from the chart above and replace them with seven fallacies that are more inclusive of moral (e.g. emotional) and cultural biases. To me, the “Skeptic’s Guide to the Universe” model feels like a 12-step program of rationality. It may help you think in a desirable way (e.g. pure rationality). However, pure rationality does not provide you with a means to place conditions on an objective argument. The triumph of logical rigor ultimately becomes a straitjacket of the mind, reducing one’s ability to think situationally.

Are the arbiters of deduction wrong on six counts?

Now it appears that I’m not alone in my concerns. Big Think now has a theme, “The Fallacy Fallacy”, on the fallacies of logical fallacies [4], with contributions from Alex Berezow, Julia Galef, Daniel Honan, and James Lawrence Powell.

In this collection of essays and interviews, the overuse of logical fallacies is itself cited as a fallacy of composition, and better ways to construct arguments are proposed. These include several general observations related to the validity of reason itself, observations that transcend the popular “identify the fallacy” model.

One theme involves making the case for consensus through joint argumentation. Correct answers are not to be found via the most rigorous argument, but by exploring many complementary arguments, each with their own flaws.  

Another theme involves being mindful of cognitive biases such as confirmation bias or cultural preferences. Even when an argument is highly rigorous by the standards of logical consistency, it may still suffer from a lack of perspective.

The third major theme involves the recognition that ignorance is a valid starting point [5] for many arguments. It is impossible to know everything about a topic, so any principled argument is bound to be incomplete. And the traditional fallacy model [6] is likely to make things worse.

NOTES:

[1] This is a list of 24 common logical fallacies, courtesy of Yourlogicalfallacyis.com (Jesse Richardson, Andy Smith, and Som Meadon). Also, most of these are individually found on Wikipedia with a more detailed explanation.

[2] Reinert, C.   Denialism vs. Skepticism. Institute for Ethics and Emerging Technologies blog, February 23 (2014).

[3] Cohen, P.   “Epistemic Closure”? Those are fighting words. NY Times Books, April 27 (2010).

[4] This is not a tautology! But it’s not the same thing as the formal version of the fallacy fallacy (a.k.a. argumentum ad logicam).

[5] Contrast with: Argument from Ignorance. RationalWiki.

[6] A nice resource for better understanding all possible logical fallacies: The Fallacy-a-Day-Podcast. A fallacy a day, in readable and podcast form.

Bitcoin Angst with an Annotated Blogroll

This content is being cross-posted to Synthetic Daisies.


This post is about the crypto-currency Bitcoin. If you are interested in the technical aspects of Bitcoin (WARNING: highly technical Computer-Science and Mathematics content), read the following reference paper or check out the Bitcoin category on the Self-evident blog. Otherwise, please read on. Citations:

Nakamoto, Satoshi   Bitcoin: A Peer-to-Peer Electronic Cash System. Internet Archive, ark:/13960/t71v6vc06.  (2009).

Friedl, S.   An Illustrated Guide to Cryptographic Hashes. Steve Friedl’s Unixwiz.net Tech Tips. (2005).

Khan Academy: Bitcoin tutorial videos.

Being a techno-optimist (or realist, depending on which metric you use), I can’t help but be fascinated by the Bitcoin phenomenon. I have an interest in Economics and alternative social systems, so the promise of Bitcoin is all the more attractive. Orthogonal Research is looking into the utility of the Bitcoin model (particularly the cryptographic hash function and wagering capabilities) for understanding the evolution and emergence of economic value.

I am generally skeptical of trends and propaganda. Therefore, once I learned that there are a finite number of Bitcoins in the world, I became unconvinced that Bitcoin could ever replace governmental currencies in the long-term. This inflexibility (which may have its roots in representative money and goldbug psychology) is one potential cause of periods of Bitcoin deflation (i.e. the value has gone up relative to real-world goods and services). This deflation has increased the hype around mining opportunities, as mining activity for high-valued Bitcoins resembles a gold rush. Conversely, Bitcoin is also vulnerable to bouts of severe inflation, which has occurred quite recently due to its use in major criminal rings and the downside of the Gartner hype cycle.

Trouble brewing on Mt. Gox! This is only temporary, though.

A lot of the Bitcoin hype is confusing to say the least. And it is not clear to me if Bitcoin mining is a totally above-board activity (this will be addressed in the articles at the end of this post). Nevertheless, Bitcoin is a significant step beyond virtual currencies such as Linden Dollars. This has been demonstrated by its interchange with conventional money and the trust a critical mass of people have placed in the currency. In addition, its cryptographic features may make Bitcoin (or something similar) a prime candidate as the currency of choice for secure internet transactions.
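For readers who want a feel for what “mining” actually computes, here is a toy proof-of-work sketch in Python using the standard hashlib library. This is not the actual Bitcoin protocol (which hashes a block header against a numeric difficulty target), but it captures why finding a valid hash is expensive while verifying one is cheap; the block data and difficulty below are made up for illustration:

```python
import hashlib
import itertools

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex characters -- a stand-in for Bitcoin's
    hash-below-target puzzle."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest

nonce, digest = mine("Alice pays Bob 1 BTC")
print(f"nonce = {nonce}, hash = {digest}")
# Verification is a single hash: anyone can recompute and check the prefix.
```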

Below is an annotated bibliography of articles and blog posts on the phenomenon known as Bitcoin mining/trading and its libertarian underpinnings. In this discussion, I have noticed a pattern similar to the public discussion surrounding MOOCs. Much like MOOCs, technology people dominated the first few years of development, and the discussion was almost universally positive. After the initial hype, more critical voices emerged, usually from more traditional fields related to the technology. With MOOCs, these were University professors and instructors, but with Bitcoin the criticism comes from financial and economics types.

Annotated Bibliography of Bitcoin: A diversity of viewpoints from academics and journalists, mostly critical. If you want a more blue sky view of Bitcoin, there are plenty of those on the web as well. Hope you find this educational.

1) Kaminska, I.   Wikicoin. Dizzynomics, December 7 (2013).

Proposes a Bitcoin-like system for adding value to Wikipedia, without relying on the rules of Wikipedia. No competition for CPUs, reward people for valuable contributions (rather than content by the word), and new coins create new resources.

2) Stross, C.   Why I want Bitcoin to die in a fire. Charlie’s Diary, December 18 (2013).

Bitcoin economy has a number of major flaws, including: high Gini coefficient (measure of economic inequality), prevalence of fraudulent behavior due to scarcity, use as a proxy for black market exchanges, mining is computationally expensive and encourages spyware and theft schemes.

3) McMillan, R.   Bitcoin stares down impending apocalypse (again). January 10 (2014).

An article that discusses the distribution of Bitcoins (and hence inequality) among candidate miners. Read as a counterpoint to article (2).

4) Mihm, S.   Bitcoin Is a High-Tech Dinosaur Soon to Be Extinct. Bloomberg News, December 31 (2013).

A historical survey of private and fiat currencies, and how they work against central currencies. According to this view, Bitcoin represents the dustbin of history rather than the future of currency.

5) Krugman, P.   Bitcoin is evil. The Conscience of a Liberal blog, December 28 (2013).

A skeptical take on the viability of Bitcoin, and a primer on how Bitcoin is similar to a faux gold standard. Is Bitcoin a reliable store of value? Unlikely, given its recent performance and reputation.

6) Roche, C.   The Biggest Myths in Economics. Pragmatic Capitalism, January 8 (2014).

A refresher/primer on the theories (and mythical ideas) behind monetary policy and currency circulation. No explicit mention of Bitcoin but still relevant. Read along with article (5).

7) McMillan, R. and Metz, C.   Bitcoin Survival Guide: Everything You Need to Know About the Future of Money. Wired Enterprise, November 25 (2013).

Comprehensive overview of the Bitcoin enterprise, but nary a skeptical word. Describes the intentionally-designed upper limit on the number of Bitcoin that can circulate, as well as the cryptographic hash which enables transactions and discourages counterfeiting.

8) Yglesias, M.   Why I Haven’t Changed My Mind About Bitcoin. Moneybox, December 2 (2013).

Begins with an exchange of tweets regarding the counterfeiting protections afforded by Bitcoin. Additional discussion about how the currency can be used to evade national currency regulations.

9) Coppola, F.   Bubbles, Banks, and Bitcoin. Forbes, December 30 (2013).

Explores the notion of the “entanglement” of crypto- (e.g. Bitcoin) and state (e.g. Dollars, Euros, Yuan) currencies. If a private currency system is bailed out by public ones, we will end up with a situation like the Lehman Brothers bailout. Furthermore, the uncertainty of Bitcoin as a store of value will undermine the trustworthiness of the currency, which leads to other troubles.

10) Kaminska, I.   The economic book of life. December 31 (2013).

A blog post which follows up on the Forbes article by Coppola. Is Bitcoin a harbinger of the eventual “definancialization” of money? In the digital world, thousands of digital currencies might exist side-by-side. The connections between the futurist/extropian notion of “Abundance” and crypto-currencies are also explored.

11) Salmon, F.   The Bitcoin Bubble and the Future of Currency. Medium, November 27 (2013).

A historical and speculative take on the current Bitcoin bubble and the future of money. Is Bitcoin the future? Probably not, but may very well point the way ahead.

Hype vs. valuation: a month-long comparison.

12) Authers, J.   Time to take the Bitcoin bubble seriously. FT.com, December 11 (2013).

Argues that Bitcoin is now a serious contender as a crypto-currency due to attention paid by Wall Street and major investment firms.

13) Liu, A.   Is it time to take Bitcoin Seriously? Vice Motherboard (2013).

A review of Bitcoin’s place in the contemporary social and financial landscape. Is it time to take Bitcoin seriously? Many people already are. Makes points that are complementary to the discussion in (12).

14) Gans, J.   Time for a Little Bitcoin Discussion. Economist’s View, December 25 (2013).

A re-evaluation of one Economist’s view of Bitcoin. Very thoughtful and informative.

Inspired by a Visit to the Network’s Frontier….

This post has been cross-posted to Synthetic Daisies.


Recently, I attended the Network Frontiers Workshop at Northwestern University in Evanston, IL. This was a three-day session in which researchers engaged in network science from around the world gathered to present their work. They also came from many home disciplines, including computational biology, applied math and physics, economics and finance, neuroscience, and more.

 

The schedule (all researcher names and talk titles) can be found here. I was among the first presenters on the first day, presenting “From Switches to Convolution to Tangled Webs” [1], which involves network science from an evolutionary systems biology perspective.

One Field, Many Antecedents

For many people who have a passing familiarity with network science, it may not be clear as to how people from so many disciplines can come together around a single theme. Unlike more conventional (e.g. causal) approaches to science, network (or hairball) science is all about finding the interactions between the objects of analysis. Network science is the large-scale application of graph theory to complex systems and ever-bigger datasets. These data can come from social media platforms, high-throughput biological experiments, and observations of statistical mechanics.

 

 The visual definition of a scientific “hairball”. This is not causal at all…..

25,000 foot View of Network Science

But what does a network science analysis look like? To illustrate, I will use an example familiar to many internet users. Think of a social network with many contacts. The network consists of nodes (e.g. friends) and edges (e.g. connections) [2]. Although there may be causal phenomena in the network (e.g. influence, transmission), the structure of the network is determined by correlative factors. If two individuals interact in some way, this increases the correlation between the nodes they represent. This gives us a web of connections in which the connectivity can range from random to highly-ordered, and the structure can range from homogeneous to heterogeneous.

Friend data from my Facebook account, represented as a sizable (N=64) heterogeneous network. COURTESY: Wolfram|Alpha Facebook app.

Continuing with the social network example, you may be familiar with the notion of “six degrees of separation” [3]. This describes one aspect (e.g. something that enables nth-order connectivity) of the structure inherent in complex networks. Again consider the social network: if there are no preferences for who contacts whom and connections form only among near neighbors, there are no reliable short-cuts, and the path between any two individuals is generally long. This path across the network is also known as the network diameter, and it is an important feature of a network’s topology.

Example of a social network. This example is homogeneous, but with highly-regular structure (e.g. non-random).

 

Let us further assume that in the same network, there happen to be strong preferences for inter-node communication, which leads to changes in connectivity. In such cases, we get connectivity patterns that range from scale-free [4] to small-world [5]. In social networks, small-world networks have been implicated in the “six degrees” phenomenon, as the path between any two individuals is much shorter than the network’s size would suggest. Scale-free and especially small-world networks have a heterogeneous structure, which can include local subnetworks (e.g. modules or communities) and small subpopulations of nodes with many more connections than other nodes (e.g. network hubs). Statistically, heterogeneity can be determined using a number of measures, including betweenness centrality and network diameter.

Example of a small-world network, in the scheme of things.
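To make the contrast between random, small-world, and scale-free connectivity concrete, here is a short sketch using the NetworkX library; the network sizes and parameters are arbitrary choices for illustration, not anything drawn from the workshop talks:

```python
import networkx as nx

n, k = 500, 10  # number of nodes, target mean degree

graphs = {
    "random (Erdos-Renyi)":         nx.erdos_renyi_graph(n, k / n, seed=1),
    "small-world (Watts-Strogatz)": nx.watts_strogatz_graph(n, k, 0.1, seed=1),
    "scale-free (Barabasi-Albert)": nx.barabasi_albert_graph(n, k // 2, seed=1),
}

for name, g in graphs.items():
    # Measure on the largest connected component so path lengths are defined
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(name,
          "| clustering:", round(nx.average_clustering(giant), 3),
          "| mean path length:", round(nx.average_shortest_path_length(giant), 2))
```

The small-world graph combines short paths with much higher local clustering, while the scale-free graph develops hubs and a heterogeneous degree distribution.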

Emerging Themes

While this example was made using a social network, the basic methodological and statistical approach can be applied to any system of strongly-interacting agents that can provide a correlation structure [6]. For example, high-throughput measurements of gene expression can be used to form a gene-gene interaction network. Genes that correlate with each other (above a pre-determined threshold) are considered connected in a first-order manner. The connections, while indirectly observed, can be statistically robust and validated via experimentation. And since all assayed genes (on the order of 10^3 genes) are likewise connected, second- and third-order connections are also possible. The topology of a given gene-gene interaction network may be informative about the general effects of knockout experiments, environmental perturbations, and more [7].
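A minimal sketch of the thresholding step described above, using NumPy and NetworkX on a made-up expression matrix; the gene and sample counts and the 0.5 threshold are arbitrary:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy expression matrix: 50 genes x 30 samples (hypothetical data)
expression = rng.normal(size=(50, 30))

# Pairwise gene-gene correlations (rows are genes)
corr = np.corrcoef(expression)

# Keep only first-order edges above a pre-determined threshold
threshold = 0.5
g = nx.Graph()
g.add_nodes_from(range(corr.shape[0]))
rows, cols = np.where(np.triu(np.abs(corr), k=1) > threshold)
g.add_edges_from(zip(rows.tolist(), cols.tolist()))

print(g.number_of_nodes(), "genes,", g.number_of_edges(), "correlation edges")
```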

This combination of exploratory and predictive power is just one reason why the network approach has been applied to many disciplines, and has even formed a discipline in and of itself [8]. At the Network Frontiers Workshop, the talks tended to coalesce around several themes that define potential future directions for this new field. These include:

A) general mechanisms: there are a number of mechanisms that allow the network to adaptively change, stay the same in the face of pressure to change, or function in some way. These mechanisms include robustness, the identification of switches and oscillators, and the emergence of self-organized criticality among the interacting nodes. Papers representing this theme may be found in [9].

 

The anatomy of a forest fire’s spread, from a network perspective.

B) nestedness, community detection, and clustering: along with the concept of core-periphery organization, these properties may or may not exist in a heterogeneous network. Such techniques allow us to partition a network into subnetworks (modules) that may operate with a certain degree of independence (a minimal code sketch follows this list). Papers representing this theme may be found in [10].

C) multilevel networks: even in the case of social networks, each “node” can represent a number of parallel processes. For example, while a single organism possesses both a genotype and a phenotype, the correlational structure for genotypic and phenotypic interactions may not always be identical. To solve this problem, a bipartite (two independent node sets) graph structure may be used to represent different properties of the population of interest. While this is just a simple example, multilevel networks have been used creatively to attack a number of problems [11].

D) cascades, contagions: the diffusion of information in a network can be described in a number of ways. While the common metaphor of “spreading” may be sufficient in homogeneous networks, it may be insufficient to describe more complex processes. Cascades occur when transmission is sustained beyond first-order interactions. In a social network, a message that gets passed to a friend of a friend of a friend (e.g. third-order interactions) illustrates the potential of the network topology to enable cascades. Papers representing this theme may be found in [12].

E) hybrid models: as my talk demonstrates, the power and potential of complex networks can be extended to other models. For example, the theoretical “nodes” in a complex network can be represented as dynamic entities. Aside from real-world data, this can be achieved using point processes, genetic algorithms, or cellular automata. One theme I detected in some of the talks was the potential for a game-theoretic approach, while others involved using Google searches and social media activity to predict markets and disease outbreaks [13].
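As a small illustration of theme (B), here is a community-detection sketch using NetworkX’s greedy modularity maximization on Zachary’s karate club network, a standard toy social network; any of the methods cited in [10] could be substituted for this particular algorithm:

```python
import networkx as nx
from networkx.algorithms import community

# Zachary's karate club: a classic small social network
g = nx.karate_club_graph()

# Greedy modularity maximization -- one of many community-detection methods
communities = community.greedy_modularity_communities(g)

for i, nodes in enumerate(communities):
    print(f"community {i}: {sorted(nodes)}")
```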

Here is a map of connectivity across three social media platforms: Facebook, Twitter, and Mashable. COURTESY: Figure 13 in [14].

NOTES:

[1] Here is the abstract and presentation. The talk centered around a convolution architecture, my term for a small-scale physical flow diagram that can be evolved to yield not-so-efficient (e.g. sub-optimal) biological processes. These architectures can be embedded into large, more complex networks as subnetworks (in a manner analogous to functional modules in gene-gene interaction or gene regulatory networks).

One person at the conference noted that this had strong parallels with the book “Plausibility of Life” (excerpts here) by Marc Kirschner and John Gerhart. Indeed, this book served as inspiration for the original paper and current talk.

[2] In practice, “nodes” can represent anything discrete, from people to cities to genes and proteins. For an example from brain science, please see: Stanley, M.L., Moussa, M.N., Paolini, B.M., Lyday, R.G., Burdette, J.H. and Laurienti, P.J.   Defining nodes in complex brain networks. Frontiers in Computational Neuroscience, doi:10.3389/fncom.2013.00169 (2013).

[3] The “six degrees” idea is based on an experiment conducted by Stanley Milgram, in which he sent out and tracked the progression of a series of chain letters through the US Mail system (a social network).

The potential power of this phenomenon (the opportunity to identify and exploit weak ties in a network) was advanced by the sociologist Mark Granovetter: Granovetter, M.   The Strength of Weak Ties: A Network Theory Revisited. Sociological Theory, 1, 201–233 (1983).

The small-world network topology (the Watts-Strogatz model), which embodies the “six degrees” principle, was proposed in the following paper: Watts, D. J. and Strogatz, S. H.   Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442 (1998).

[4] Scale-free networks can be defined as networks with no characteristic number of connections across all nodes. Connectivity tends to scale with growth in the number of nodes and/or edges. Whereas connectivity in a random network can be characterized using a Gaussian (i.e. normal) distribution, connectivity in a scale-free network can be characterized using a power-law (i.e. heavy-tailed) distribution.
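The difference between the two degree distributions is easy to see numerically; here is a short sketch with NetworkX (the network sizes and parameters are arbitrary, chosen only so the two graphs have comparable mean degree):

```python
import networkx as nx

n, m = 2000, 3

random_net     = nx.erdos_renyi_graph(n, 2 * m / n, seed=7)   # mean degree ~ 6
scale_free_net = nx.barabasi_albert_graph(n, m, seed=7)       # mean degree ~ 6

for name, g in [("random", random_net), ("scale-free", scale_free_net)]:
    degrees = [d for _, d in g.degree()]
    print(name,
          "| mean degree:", round(sum(degrees) / len(degrees), 1),
          "| max degree:", max(degrees))
# The random graph's degrees stay near the mean; the scale-free graph
# produces hubs with degrees far out in the heavy tail.
```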

[5] Small-world networks are defined by their hierarchical (e.g. strongly heterogeneous) structure, high local clustering, and a short path length across the network. This is a special case of the more general scale-free pattern, and can be characterized with a strong power law (i.e. the distribution has a heavier tail). Because any one node can reach any other node in a relatively small number of steps, there are a number of organizational consequences to this type of configuration.

[6] Here are two foundational papers on network science [a, b] and two enlightening primers on complexity and network science [c, d]:

[a] Albert, R. and Barabasi, A-L.   Statistical mechanics of complex networks. Reviews in Modern Physics, 74, 47–97 (2002).

[b] Newman, M.E.J.   The structure and function of complex networks. SIAM Review, 45, 167–256 (2003).

[c] Shalizi, C.   Community Discovery Methods for Complex Networks. Cosma Shalizi’s Notebooks – Center for the Study of Complex Systems, July 12 (2013).

[d] Voytek, B.   Non-linear Systems. Oscillatory Thoughts blog, June 28 (2013).

[7] For an example, please see: Cornelius, S.P., Kath, W.L., and Motter, A.E.   Controlling complex networks with compensatory perturbations. arXiv:1105.3726 (2011).

[8] Guimera, R., Uzzi, B., Spiro, J., and Amaral, L.A.N   Team Assembly Mechanisms Determine Collaboration Network Structure and Team Performance. Science, 308, 697 (2005).

[9] References for general mechanisms (e.g. switches and oscillators):

[a] Taylor, D., Fertig, E.J., and Restrepo, J.G.   Dynamics in hybrid complex systems of switches and oscillators. Chaos, 23, 033142 (2013).

[b] Malamud, B.D., Morein, G., and Turcotte, D.L.   Forest Fires: an example of self-organized critical behavior. Science, 281, 1840-1842 (1998).

[c] Ellens, W. and Kooij, R.E.   Graph measures and network robustness. arXiv: 1311.5064 (2013).

[d] Francis, M.R. and Fertig, E.J.   Quantifying the dynamics of coupled networks of switches and oscillators. PLoS One, 7(1), e29497 (2012).

[10] References for clustering [a], community detection [b-e], core-periphery structure detection [f], and nestedness [g]:

[a] Malik, N. and Mucha, P.J.   Role of social environment and social clustering in spread of opinions in co-evolving networks. Chaos, 23, 043123 (2013).

[b] Rosvall, M. and Bergstrom, C.T.   Maps of random walks on complex networks reveal community structure. PNAS, 105(4), 1118-1123 (2008).

* the image above was taken from Figure 3 of [b]. In [b], an information-theoretic approach to discovering network communities (or subgroups) is introduced.

[c] Colizza, V., Pastor-Satorras, R. and Vespignani, A.   Reaction–diffusion processes and metapopulation models in heterogeneous networks. Nature Physics, 3, 276-282 (2007).

[d] Bassett, D.S., Porter, M.A., Wymbs, N.F., Grafton, S.T., Carlson, J.M., and Mucha, P.J.   Robust detection of dynamic community structure in networks. Chaos, 23, 013142 (2013).

* the authors characterize the dynamic properties of temporal networks using methods such as optimization variance and randomization variance.

[e] Nishikawa, T. and Motter, A.E.   Discovering network structure beyond communities, Scientific Reports, 1, 151 (2011).

[f] Bassett, D.S., Wymbs, N.F., Rombach, M.P., Porter, M.A., Mucha, P.J., and Grafton, S.T.   Task-Based Core-Periphery Organization of Human Brain Dynamics. PLoS Computational Biology, 9(9), e1003171 (2013).

* a good example of how core-periphery structure is extracted from brain networks constructed from fMRI data.

[g] Staniczenko, P.P.A., Kopp, J.C., and Allesina, S.   The ghost of nestedness on ecological networks. Nature Communications, doi:10.1038/ncomms2422 (2012).

[11] References for multilevel networks:

[a] Szell, M., Lambiotte, R., Thurner, S.   Multirelational organization of large-scale social networks in an online world. PNAS, doi/10.1073/pnas.1004008107 (2010).

[b] Ahn, Y-Y., Bagrow, J.P., and Lehmann, S.   Link communities reveal multiscale complexity in networks. Nature, 466, 761-764 (2010).

[12] References for cascades and contagions:

[a] Centola, D.   The Spread of Behavior in an Online Social Network Experiment. Science, 329, 1194-1197 (2010).

[b] Brummitt, C.D., D’Souza, R.M., and Leicht, E.A.   Suppressing cascades of load in interdependent networks. PNAS, doi:10.1073/pnas.1110586109 (2011).

[c] Brockmann, D. and Helbing, D.   The Hidden Geometry of Complex, Network-Driven Contagion Phenomena. Science, 342(6164), 1337-1342 (2013).

[d] Glasserman, P. and Young, H.P.   How Likely is Contagion in Financial Networks? Oxford University Department of Economics Discussion Papers, #642 (2013).

[13] Reference for hybrid networks and other themes, including network evolution [a,b] and the use of big data in network analysis [c,d]:

[a] Pang, T.Y. and Maslov, S.   Universal distribution of component frequencies in biological and technological systems. PNAS, doi:10.1073/pnas.1217795110 (2012).

[b] Bassett, D.S., Wymbs, N.F., Porter, M.A., Mucha, P.J., and Grafton, S.T.   Cross-Linked Structure of Network Evolution. arXiv: 1306.5479 (2013).

[c] Ginsberg, J., Mohebbi, M.H., Patel, R.S., Brammer, L., Smolinski, M.S., and Brilliant, L.   Detecting influenza epidemics using search engine query data. Nature, 457, 1012–1014 (2008).

[d] Michel, J-B., Shen, Y.K., Aiden, A.P., Veres, A., Gray, M.K., Google Books Team, Pickett, J.P., Hoiberg, D., Clancy, D., Norvig, P., Orwant, J., Pinker, S., Nowak, M.A., Aiden, E.L.   Quantitative Analysis of Culture Using Millions of Digitized Books. Science, 331(6014), 176-182 (2011).

[14] Ferrara, E.   A large-scale community structure analysis in Facebook. EPJ Data Science, 1:9 (2012).

The Inefficiency (and Information Content) of Scientific Discovery

This content has been cross-posted to Synthetic Daisies.


In this post, I will discuss a somewhat trendy topic that needs further critical discussion. It combines a crisis in replicating experiments with the recognition that science is not a perfect or errorless pursuit. We start with a rather provocative article in the Economist called “Trouble at the Lab” [1]. The main idea: science needs serious reform in its practice, from the standardization of experimental replicability to greater statistical rigor.

While there are indeed perpetual challenges posed by the successful replication of experiments and finding the right statistical analysis for a given experimental design, most of the points in this article should be taken with a grain of salt. In fact, the conclusions seem to suggest that science should be run more like a business (GOAL: the most efficient allocation of resources). This article suffers from many of the same issues as the Science article featured in my last Fireside Science post. Far from being an efficient process, making scientific discoveries and uncovering the secrets of nature requires a very different set of ideals [2]. But don’t just rely on my opinions. Here is a sampling of letters to the editor which followed:

The first is from Stuart Firestein, the author of “Ignorance: how it drives science“, which is discussed in [2]. He argues that applying a statistician’s theoretical standards to all forms of data is not realistic. While the portion of the original article [1] discussing problems with statistical analysis in most scientific papers is the strongest point made, it also rests on some controversial assumptions.

The first involves a debate as to whether or not the Null Hypothesis Significance Test (NHST) is the best way to uncover significant relationships between variables. NHST is the use of t-tests and ANOVAs to determine significant differences between experimental conditions (e.g. treatment vs. no treatment). As an alternative, naive and other Bayesian methods have been proposed [3]. However, this still makes a number of assumptions about the scientific enterprise and process of experimentation to which we will return.
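For readers unfamiliar with the mechanics, here is roughly what the NHST workflow boils down to in code (SciPy; the simulated effect size and sample sizes are arbitrary). The Bayesian alternatives discussed in [3] would instead report something like a Bayes factor or a posterior distribution over the group difference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=0.4, scale=1.0, size=30)  # treatment condition
control   = rng.normal(loc=0.0, scale=1.0, size=30)  # no-treatment condition

# Two-sample t-test: is the difference in means larger than chance predicts?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A one-way ANOVA on the same two groups is the equivalent test (F = t^2)
f_stat, p_anova = stats.f_oneway(treatment, control)
print(f"F = {f_stat:.2f}, p = {p_anova:.4f}")
```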

The second letter refers to one’s philosophy of science orientation. This gets a bit at the issue of scientific practice, and how the process of doing science may be misunderstood by a general audience. Interestingly, the notion of “trust, but verify” does not come from science at all, but from diplomacy/politics. Why this is assumed to also be the standard of science is odd.

The third letter will serve as a lead-in to the rest of this post. This letter suggests that the scientific method is simply not up to the task of dealing with highly complex systems and issues. The problem is one of public expectation, which I agree with in part. As experimental methods provide a way to rigorously examine hypothetical relationships between two variables, uncertainty may often swamp out that signal. While I think this aspect of the critique is a bit too pessimistic, let’s keep these thoughts in mind…….

 

A reductionist tool in a complex world

Now let’s turn to what an experiment uncovers with respect to the complex system you want to understand. While experiments have great potential for control, they are essentially hyper-reductionist in scope. When you consider that most experiments test the potential effect of one variable on another, an experiment may serve no less of a heuristic function than a simple mathematical model [4]. And yet in the popular mind, empiricism (e.g. data) tends to trump conjecture (e.g. theory) [5].

Figure 1. A hypothesis of the relationship between a single experiment and a search space (e.g. nature) that contains some phenomenon of interest.

Ideally, the goal of a single experiment is to reliably uncover some phenomenon in what is usually a very large discovery space. As we can see in Figure 1, a single experiment must be designed to overlap with the phenomenon. This can be very difficult to accomplish when the problem at hand is complex and multi-dimensional (HINT: most problems are). A single experiment is also a relatively information-poor way to conduct this investigation, as shown in Figure 2. Besides treating it as a highly-controllable (and highly reduced) means to test hypotheses, an alternate way to think about experimental design is as an n-bit register [6].

Figure 2. A single experiment may be an elegant way to uncover the secrets of nature, but how much information does it actually contain?

Now to get an idea of how such overlap works in the context of replication, we can turn to the concept of an experimental footprint (Figure 3). An experimental footprint qualitatively describes what an experiment (or its replication) uncovers relative to some phenomenon of interest. Let’s take animal behavior as an example. There are many sources of variation that contribute to a specific behavior. In any one experiment, we can only observe some of the behavior, and even less of the underlying contributing factors and causes.

A footprint is also useful in terms of describing two things we often do not think about. One is the presence of hidden variables in the data. Another is the effect of uncertainty. Both depend on the variables tested and problems chosen. But just because subatomic particles yield fewer surprises than human psychology does not necessarily mean that the Psychologist is less capable than the Physicist.

Figure 3. Experimental footprint of an original experiment and its replication relative to a natural phenomenon.

The imprinting experiments conducted with geese by Konrad Lorenz serve as a good example. The original experiments were supposedly far messier [7] than the account presented in modern textbooks. What if we suddenly were to find out that replication of the original experimental template did not work in other animal species (or even among geese anymore)? It suggests that we may need a new way to assess this (other than chalking it up to mere sloppiness).

So while lack of replication is a problem, the notion of a crisis is overblown. As we have seen in the last example, the notion of replicable results is an idealistic one. Perhaps instead of saying that the goal of experimental science is replication, we should consider a great experiment as one that reveals truths about nature.

This may be best achieved not only by the presence of homogeneity, but also by a high degree of tolerance (or robustness) to changes in factors such as ecological validity. To assess the robustness of a given experiment and its replications (or variations), we can use information content to tell us whether or not a given set of non-replicable experiments actually yields information. This might be a happy medium between an anecdotal finding and a highly-repeatable experiment.

Figure 4. Is the goal of an experiment unfailingly successful replication, or a robust design that provides diverse information (e.g. successful replications, failures, and unexpected results) across replications?

Consider the case of an experimental paradigm that yields various types of results, such as the priming example from [1]. While priming is highly replicable under certain conditions (e.g. McGurk effect) [8], there is a complexity that requires taking the experimental footprint and systematic variation between experimental replications into account.

This complexity can also be referred to as the error-tolerance of a given experiment. Generally speaking, the error tolerance of a given set of experiments is correspondingly higher as information content (related to variability) increases. So even when the replications do not pan out, they are nonetheless still informative. To maximize error-tolerance, the goal should be an experiment with a small enough footprint to be predictive, but a large enough footprint to be informative.
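One crude way to operationalize this “information content across replications” idea is to compute the Shannon entropy of the distribution of replication outcomes; here is a sketch with entirely hypothetical outcome labels and counts:

```python
from collections import Counter
from math import log2

# Hypothetical outcomes of ten replications of the same design:
# "success", "failure", and "unexpected" results all carry information.
outcomes = ["success", "success", "failure", "unexpected",
            "success", "failure", "success", "unexpected",
            "success", "failure"]

counts = Counter(outcomes)
total = len(outcomes)

# Shannon entropy (bits) as a crude index of how much the set of
# replications tells us beyond a single, perfectly repeatable result.
entropy = -sum((c / total) * log2(c / total) for c in counts.values())
print(f"outcome entropy = {entropy:.2f} bits")
```

A perfectly repeatable experiment would score zero bits by this measure; the point is not that higher entropy is better, but that variability across replications carries information that a replication-or-bust framing throws away.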

In this way, experimental replication would no longer be the ultimate goal. Instead, the goal would be to achieve a sort of meta-consistency. Meta-consistency could be assessed by both the robustness and statistical power of an experimental replication. And we would be able to sleep a little better at night knowing that the line between hyper-reductionism and fraudulent science has been softened while not sacrificing the rigors of the scientific method.

NOTES:

[1] Unreliable Research: trouble at the lab. Economist, October 19 (2013).

[2] Alicea, B.   Triangulating Scientific “Truths”: an ignorant perspective. Synthetic Daisies blog, December 5 (2012).

[3] Johnson, V.E.   Revised standards for statistical evidence. PNAS, doi: 10.1073/pnas.1313476110 (2013).

[4] For more information, please see: Kaznatcheev, A.   Are all models wrong? Theory, Games, and Evolution Group blog, November 6 (2013).

[5] Note that the popular conception of what a theory is and what theories actually are (in scientific practice) constitutes two separate spheres of reality. Perhaps this is part of the reason for all the consternation.

[6] An n-bit register is a concept from computer science. In computer science, a register is a place to hold information during processing. In this case, processing is analogous to exploring the search space of nature. Experimental designs are thus representations of nature that enable this register.

For a more formal definition of a register, please see: Rouse, M.   What is a register? WhatIs.com (2005).

[7] This is a personal communication, as I cannot remember the original source. The larger point here, however, is that groundbreaking science is often a trial-and-error affair. For an example (and its critique), please see: Lehrer, J.   Trials and Errors: why science is failing us. Wired, December 16 (2011).

[8] For more on the complexity of psychological priming, please see: Van den Bussche, E., Van den Noortgate, W., and Reynvoet, B. Mechanisms of masked priming: a meta-analysis. Psychological Bulletin, 135(3), 452-477 (2009).