> “SANDHOGS,” THEY CALLED THE LABORERS who built the tunnels leading into New York’s Penn Station at the beginning of the last century. Work distorted their humanity, sometimes literally. Resurfacing at the end of each day from their burrows beneath the Hudson and East Rivers, caked in the mud of battle against glacial rock and riprap, many sandhogs succumbed to the bends. Passengers arriving at the modern Penn Station—the luminous Beaux-Arts hangar of old long since razed, its passenger halls squashed underground—might sympathize. Vincent Scully once compared the experience to scuttling into the city like a rat. Zoomorphized, we are joined to the earlier generations.
This goes on for about seven paragraphs before I have any idea what the article is about. I understand “setting the scene,” but I can’t tell whether or not to care about an article if it meanders through this flowing exposition before indicating its central thesis.
It seems like a popular style in thinkpieces and some areas of journalism. The author writes a semi-relevant title, a provocative subtitle, and five to ten paragraphs of “introduction” that throw you right into the thick of a story whose purpose isn’t clear unless you already know what the article is about. Rather than capturing my attention with engaging exposition, I find it takes me out of it. But it must work if it’s so ubiquitous; presumably their analytics have confirmed this style is engaging.
"But, I explained to my work colleagues as the Princeton local pulled out from platform eight and late-arriving passengers swished up through the carriages in search of empty seats, both the original Penn Station and its unlovely modern spawn were seen at their creation as great feats of engineering."
I had to highlight between the commas to get through that one.
The content need not be true, but at least everyone will be happy with their preferred writing styles....
If I could read fiction that is written exactly for me I would love it. And as for "non-fiction", I reference check any particularly interesting claim anyway, so I'd be happy to try and use the AI for that too. The way I see it, reading is much more about exercising the brain in thinking about new things than about learning new facts.
And yet a world of Pop Tarts is sooooooo boring... And no one makes heart stoppingly good fish stew using Pop Tarts.
This fella may not have written the best piece of the week, and we may not remember it tomorrow, but I think the fact that he's attempting to create something gives him a chance of actually getting there. Looking at a dashboard completely kills that, in my opinion.
Screw the stats! Make what you think is good!
IMO the author makes some very valid points about fuzzy products and endpoints in the current AI/data/ML/magic craze. These are under-articulated elsewhere, because, well hey there's a lot of money flowing! Who wants to be a killjoy and not "get it" (just like in 1999 ;)?
Two more specific points: 1. The descriptions of the CEO are eerily familiar to me. This guy is almost an archetype. Reminds me of a person I've worked with in that role who was also associated with a similar-ish company. It really paints the con-game side of all this.
2. A deeper point (and worth the read for me) was the author's thinking about how all this didn't fit existing needs and workflows and then has a chilling thought: "It’s possible that the market for a user-hostile data system that inaccurately predicts the future and turns its human operators into automatons exists after all, and is large." You can make an argument that this kind of thing has already happened in modern customer service and, with greater negative impact, in healthcare. I.e. where the tail of easy metrics and saleable endpoints ends up wagging the dog of quality.
There's a meme going round about how the best way to refute an argument is to 'steelman' it: present the best arguments of the opposing side before refuting them. He doesn't do that here, which is one of the reasons I found it frustrating.
I agree that the way the venture raising market works today rightfully deserves some fair criticism.
These people were eating VC hype money to build Hagbard's FUCKUP from the Illuminatus! Trilogy. [2]
Not sure who I feel more sorry for. The smart employees wasting years of their prime chasing some unattainable pipe dream, the VC's who got suckered into pouring their money into some vaporware precog technology, the author trying to disguise a shit river with meandering prose, or my upcoming pay cut when the AI winter sets in.
[1] https://en.wikipedia.org/wiki/Fucked_Company
[2] First Universal Cybernetic-Kinetic-Ultramicro-Programmer (FUCKUP). FUCKUP predicts trends by collecting and processing information about current developments in politics, economics, the weather, astrology, astronomy, the I-Ching, and technology.
Some of PreData's recent "insights":
"China Trade War Fears Still Running High"
"Mall Blaze Sparks Outrage Across Russia"
In short, nothing that couldn't be revealed from the briefest skim of headlines from tomorrow morning's WSJ.com. One can stay better informed leaving a Bloomberg TicToc (which is partially machine generated) tab open all day.
My takeaway is that the world of the Jim Shinns is rapidly approaching extinction. Deals done poolside at country club dinner dances. Name game shmoozing. And serendipitous encounters on private islands. What was considered the predominant pathway to immortality in Fitzgerald's day.
Viable alternatives exist now. And any business model solely differentiated by prestige will be subsumed by free or near-free competition.
> Three months later, Predata secured a second round of venture capital funding.
People like Jim Shinn will always find a way. At least that's the argument the author seems to be making.
That's a really embarrassing mistake.
But I found this Sunday AM read enjoyable, articulate, and largely on-point (overlooking a few minor scientific errors).
The core themes here are about the hubris of a rich CEO/founder, the zaniness of the current AI "market," and their resultant effect on a particular NYC startup.
This is a season of "Silicon Valley" (HBO) done east-coast, hedge fund, Ivy League style.
I'd be shocked if anyone in the industry hasn't worked for or with a Jim. Spot-on.
Technology without vision is dehumanizing - it happened with Penn Station, where narrow quantitative and engineering goals displaced the broader human ones and led to the widely-hated station that's there now, which was excavated by people who were called hogs, and which makes passengers feel like rats. The loss is especially acute there, since everybody knows what the old station was like ( https://duckduckgo.com/?q=old+penn+station&kp=-2&iax=images&... ). It was an edifice comparable to the great gares and bahnhöfe of Europe (or to Grand Central which for some reason we decided to keep), a monument to national power, industrial wealth, and the technologies of the time, but also a space that evoked something a little more noble in the human spirit somehow.
The writer is also drawing a parallel with the dehumanizing effect of the particular startup he worked for. The analysts are the hogs, he's the rat, his own perceived loss of creativity (probably a bit exaggerated... aahhh youth) is the dehumanization part, and the absentee CEO is the lack of vision. (If a CEO has one function, it's to provide vision. And in second place, not far behind, is to establish company culture.)
Arguably, placing technical/quantitative goals above more humanistic ones is what an organization like Nazi Germany was all about. But obviously it's way more complicated than that, and I don't intend to address it further.
I would point you toward Dmitri Orlov's concept of a Technosphere. Analogous to the "biosphere" it models human technology as a quasi-intelligent entity that is global in scope.
Book: https://www.amazon.com/Shrinking-Technosphere-Technologies-A...
Excerpt (not much exposition but you'll get the point): https://cluborlov.blogspot.com/2016/02/the-technospheratu-hy...
Everybody here is exactly who most needs to hear this message. Some will doubtless resist the criticism of ML/datasci with the fervor of someone whose long-held religious belief is challenged for the first time. But you needed that. Feel free to prove the critiques wrong, by the way... that's kind of the whole point. Prove them wrong with broad projects that actually benefit humanity instead of being a mess of unintended consequences and unimpressive bullshit.
He is right about his claim of having no right to be called a “director of research”; it seems to me his skills center on cribbing thoughts from other people's thinkpieces. It's clear that he doesn't have a deep background in either neuroscience or engineering, and that he was brought to the company from a background in business journalism.
In his condemnation of the state of AI research, there is no mention of AlphaGo, or a description of the teachable pattern recognition techniques that have swept the deep learning scene over the last 6 years.
I'm sorry to be so harsh, but there is a certain tone to this piece, "let's hate all those startup a*holes", "Mark Zuckerberg can't write like F Scott Fitzgerald because his knowledge of liberal arts is too limited, unlike mine" that seems like a snooty class signaler among a certain hipster set.
There is a compelling story in here, but to me the general attitude is just condescending to everyone around him.
[0] https://aeon.co/essays/your-brain-does-not-process-informati...
Even without extrapolating from the pattern recognition tools we have today, whole classes and ranges of jobs can be fully or partially eliminated.
Here is what he says about the state of AI:
> Even the most eye-catching successes claimed for AI in recent times have been, on closer inspection, relatively underwhelming. The idea that an autonomous superhuman machine intelligence will spontaneously spring, unprogrammed, from these technologies is still the stuff of Kurzweilian fantasy. Forget Skynet; at this stage it’s not certain we’ll even get to Bicentennial Man.
> These techniques might replicate discrete functions of a human mind, but they cannot capture the mind’s totality or what makes it unique: its creativity, its genius for emotion and intuition. There’s something else going on.
"the brain has spiritual magic"
Compare that to quotes from real live top human Chinese Go players defeated by AlphaGo last year:
> “After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”
> “AlphaGo has completely subverted the control and judgment of us Go players,” Mr. Gu, the final player to be vanquished by Master, wrote on his Weibo account. “I can’t help but ask, one day many years later, when you find your previous awareness, cognition and choices are all wrong, will you keep going along the wrong path or reject yourself?”
I had a brief argument with Robert Epstein (the author of that article) because I find the argument that humans don't actually store information to be quite misleading and missing the point.
The most obvious mistake is this:
"We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not."
This is both true and false. It's true that we don't do these things the way a computer does, but it's wrong to claim that computers fundamentally do them either.
All of that happens several layers of abstraction up. Fundamentally, a computer doesn't store an image or a word either: it manipulates atoms and turns circuits on and off, and only several layers of abstraction up does any of that get translated into meaning, first by machines and then by humans.
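A minimal sketch of that layering (my illustration, not the commenter's): even in Python you can peel a "word" down through the conventions underneath it, and at the bottom there are only numbers, which are themselves only bit patterns standing in for circuit states.

```python
# A "stored word" is conventions stacked on conventions: meaning lives
# at the top layer, assigned by Unicode and ultimately by human readers.

word = "word"

# One layer down: the string is encoded as bytes (UTF-8 convention).
as_bytes = word.encode("utf-8")

# Further down: each byte is just an integer in 0-255.
as_ints = list(as_bytes)

# And each integer is just a bit pattern, i.e. circuits on and off.
as_bits = [format(b, "08b") for b in as_ints]

print(as_ints)   # [119, 111, 114, 100]
print(as_bits)   # ['01110111', '01101111', '01110010', '01100100']
```

Nothing at the bottom layer "is" the word; calling those four integers a word is a choice made higher up the stack.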
Anyone who has a hard time believing machines can become sentient should first ask themselves why they find that harder to believe than the idea that dumb matter has somehow become the pattern-recognizing feedback loops that are us.
This argument deserves much more recognition than it gets, first because it is at this point still empirically true (we have not observed non-organic sentient life), and more importantly because it is not Searle but everybody else who employs 'magic'.
Searle's point is simple. Computation is subjective. Electricity flowing through a machine doing complex things is just a physical process like anything else. You (the sentient observer) classify that physical process as meaningful, but a computer is no more 'computing' things than a falling pen computes gravity.
So sentience really is related to physical agency and sensory experience in the world, which creates consciousness in organic brains. That doesn't imply complexity or intelligence or understanding. Syntax and semantics are different things. Your pocket calculator processes the syntax of mathematics, but it does not understand the semantics of mathematics. A compiler processes symbols according to rules, but it does not understand the meaning of the computation; it has no cognition. It might be very good at what it does, but it has no capacity to understand. That's the essence of the Chinese Room, and it's still a convincing argument.
An even stronger point might be made, namely that sentience actually limits intelligence. That it requires a degree of slowness and introspection that is unsuited for fast decision-making. For a fictional treatment of this, Blindsight by Peter Watts is an excellent read.
How about this: the brain creates information from constant interaction with the world, based on the kinds of bodies we have and our needs and wants. This information doesn't exist as information until the brain creates it. Information is the product of minds. It doesn't exist in the world on its own, waiting to be processed. As such, the brain is something other than a computing device. Computers exist because we figured out how to arrange physical systems to process information that's meaningful to us. But to nature, a computer is just a physical system (and not even that, since physics is a model of nature we create).
That's Jaron Lanier's paraphrased argument against thinking of the brain as a computer. To say that information exists in the world to be processed is to make a metaphysical commitment that information exists ready made for us.
> and there being something magical about human brains that cannot be simulated
It doesn't have to be magical. There are different philosophical views on the world and the mind which lead to different conclusions. If one takes the hard problem of consciousness seriously, then consciousness cannot be computed. Not because of magic or the supernatural, but simply because consciousness is not computable, since computation is itself an abstraction (Turing machines don't exist on their own any more than any other mathematical system does). Unless your metaphysics falls along the lines of Tegmark, Plato, or Wheeler (it from bit).
Instead you can think of the brain as an information creator. We give meaning to the world. We build models. The world itself just is; it's not information, math, physics, or symbols.
What is true is that nobody has done it yet. The process is a mystery in the sense that it's not understood, which means that we don't know if it's computable or not.
This time round there will be no million fold increase in compute power to bail everyone out!
This is a perfect summary of the VC situation today. Too much money chasing no-one knows what exactly, but they're sure they'll know it when they see it.
"... judge the merit of a new idea in AI according to the perceived intelligence of its developers."
about information technology VCs and AI is just totally wrong: I don't believe VCs do that. Why? Generally, from 50,000 feet up, it's too far from the norms of the accounting, banking, and investing communities respected by the limited partners of the VCs. Uh, the limited partners (LPs) are where the VCs get nearly all the money they invest, and the limited partners are conservative people, managers of pension funds, university endowments, etc. Not only do the VCs not do that, the LPs won't let the VCs do that!
Instead, about the shortest believable view I can see is, VCs look for traction that is significant and growing rapidly in a market large enough to permit a company worth $1+ billion in a few years.
The VCs' view of traction is a weakening of the usual measures the accounting, banking, and investing communities use and respect: audited revenue and earnings.
So, sure, the best form of traction is earnings; next best, revenue; next best, lots of interested customers, e.g., advertisers willing to pay for eyeballs; and last best, just lots of eyeballs. By these norms, intelligence, brilliance, AI, technology, etc. are mostly publicity points, window dressing, the wrapping paper on a birthday gift, and with a dime still won't cover a 10-cent cup of coffee.
In a sense, the VCs have a good point, more from insight into humans and the real world than anything in a pitch deck: (1) With technology, it's too easy to push totally meaningless, useless BS. (2) Carefully studying core, deep, difficult technology is just too darned difficult to be practical for the VCs.
Or the investors believe in a Markov assumption: The future of the business and the technology from the past are conditionally independent given the current traction, its rate of growth, and the size of the market. To be clear, this Markov assumption does not say that the technology and the future of the company are independent.
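Spelled out in symbols (my notation, not the commenter's): write $F$ for the future of the business, $H$ for its technology and history, and $T$ for current traction together with its growth rate and market size. The claimed conditional independence is

```latex
P(F \mid T, H) = P(F \mid T)
```

That is, once $T$ is known, $H$ adds no further information about $F$. Note this does not assert $P(F \mid H) = P(F)$: marginally, the technology and the company's future can still be strongly dependent, which is exactly the commenter's closing caveat.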
The stories in the OP about the company Predata, to abbreviate "predictions from data", are good: The company was floundering around with guesses about what would work, e.g., for predicting terrorist attacks, that were like something from smoking funny stuff.
But here is one big place the VCs and technology are going wrong: We do have some terrific examples of how to do well. The examples are from the past 70+ years of the unique world class, all-time, unchallenged grand champion of using advanced, even original, technology for important practical results -- the US DoD.
A grand example is GPS. GPS was built by the USAF, but it was a refinement of an earlier system built by the US Navy for navigating its missile-firing submarines, started at the Johns Hopkins University Applied Physics Laboratory (JHU/APL). At one time I worked in the group that did the original work and heard the stories. A key point: the original proposal was by some physicists, almost just on the back of an envelope. Soon the project was approved and pushed forward with a lot of effort. Then, presto, bingo, it all worked just as predicted on the back of the envelope. E.g., a test receiver on the roof navigated its position to within one foot, plenty accurate for the US Navy.
So, net, for project selection and funding, here is the shocking, surprising, point that the VCs miss: Really, given the back of the envelope work, the rest was relatively routine and low risk.
And the past 70+ years of the US DoD is awash in comparable examples.
In blunt terms, the US DoD has a fantastically high batting average on far out projects evaluated just on paper. Given good evaluations of the work just on paper, the rest is relatively routine and low risk.
Well, that project funding technique does not fully solve the problem of the VCs: the VCs also need to know that the resulting product will have big success in the market. But for that there is an okay approach. The dream product would be one pill, taken once, cheap, safe, and effective, that cures any cancer. In that case the technology is so good, for such an important practical problem in such a large market, that there's no question about making the $1+ billion. The lesson from this hypothetical: the technology needs to be the first good solution, or a much better one, a "must have" for a really pressing problem in a big market.

Right, this filter would reject Facebook, Snap, and more. So the need is to start with a really big problem where new technology, say, as in the US DoD examples, can yield a "must have" solution; Facebook and Snap did not start with such problems. Just what are such problems? That's part of the challenge.

But with current VCs, come up with such a problem and a solution on paper, with brilliant founders, with AI, etc., and you will still need more than a dime to cover a 10-cent cup of coffee. Again, to get VCs up on their hind legs, bring them good data on traction, significant and growing rapidly in a large market. If the secret-sauce technology helps, fine; brilliant founders, fine; even no technology at all, fine. In all cases, what really matters is the traction.
And do you have a reference for the "fantastically high batting average" of US DoD research? Are you familiar with the SBIR program, for example?
I would judge that neither DoD/DARPA nor VCs have a great batting average. But both have some spectacular wins.
I guess I'm not in their target market then because it reads like a hit piece - so much so that I was sure that all the names were changed!
The author is out of his damn mind for not changing names, but NMP.