I like to say we are a little data company. Little data is when small firms or
individuals produce and analyze their own unique and proprietary data at a lower
order of magnitude than large organizations.
An example would be one of our products in development that we call our Doomsday
App, with the official and unwieldy name of Global Asset Bubble and Financial
Crisis Prediction Application.
We're still forging the user experience, but basically it breaks down big government
data into digestible snippets, or events. Then it runs each event through a prediction
engine. That aggregated output gets a wash through RStudio for analysis and
visualization. (I can't say enough about how much I love RStudio.) Users will be able
to plot financial contagion or bubbles on Google Earth or another mapping API in real
time. We think it's more forward-looking than similar products that rely on dated or
backward-looking economic indicators.
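The pipeline stages above (split bulk data into events, score each event with a prediction engine, aggregate the output for visualization) can be sketched roughly like this. Every name and the simple threshold rule are my own illustrative assumptions, not GovBrain's actual code or methodology:

```python
# Hypothetical sketch of the event pipeline described above.
# The threshold-based "prediction engine" is a stand-in assumption,
# not the real model.

def split_into_events(raw_records):
    """Break bulk indicator data into digestible per-record event snippets."""
    return [{"indicator": name, "value": value} for name, value in raw_records]

def predict(event, threshold=100.0):
    """Stub prediction engine: flag events whose value exceeds a threshold."""
    return {**event, "bubble_risk": event["value"] > threshold}

def run_pipeline(raw_records, threshold=100.0):
    """Split -> predict -> aggregate, mirroring the stages in the text."""
    events = split_into_events(raw_records)
    scored = [predict(e, threshold) for e in events]
    # Aggregate: keep only flagged events for downstream plotting.
    return [e for e in scored if e["bubble_risk"]]

if __name__ == "__main__":
    sample = [("housing_price_index", 180.0), ("cpi", 2.1)]
    print(run_pipeline(sample))
```

In a real system the aggregated output would then be handed off to RStudio or a mapping API for visualization, as the text describes.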
At this point you should be thinking:
a) I'm skeptical of GovBrain and their voodoo claims.
b) If Eastwood ever publishes an academic article on this, I'm going to replicate his
work and look for flaws in his methodology.
And that is exactly what I want you to do, if I can get this published someday in an
academic journal once my patent attorney and board of directors allow it. If all goes
bad with GovBrain, at least I can someday share our specific research methods with the
data science community.
As theorists, we want to give others a chance to prove us wrong. I think our
methodology on this may be parsimonious and adhere to Occam's Razor, but others
will probably disagree with some of our assumptions, rules, results, or conclusions.
They may even say the whole thing is trivial.
I think this discourse and critical analysis among peers in data science is very
important, especially when it comes to model building and testing theory and
hypotheses. Now that may be difficult if you are working on a commercial endeavor.
Your boss or CEO probably did not hire you to produce scholarship, but you can still
try to get a peer review in other ways.
At GovBrain, we wanted to get "prove us wrong" feedback from not only PhDs but
also recent grads. We talked to a young computer scientist whose father helped
develop the R Project, and a newly minted data scientist with a technical business
major and plenty of instruction through the Johns Hopkins data science track on
Coursera.
They had the right amount of skepticism but were interested in learning more, so we
encouraged additional tough-love critique. Many of you are already doing this type of
peer review, but start-ups or big data companies may not be. I hope we can continue
this knowledge discovery with the appropriate "prove me wrong" approach
throughout the data science community, so we can all produce the best commercial
product or scholarship possible.
Forecasting International Events will be held at George Washington University's
Elliott School of International Affairs on April 30th at 6:30 p.m.