I am not.
From an energy-efficiency perspective, the human brain is a very, very effective computational machine. Computers are not. Think about the scale of infrastructure a network of computers would need to achieve similar capabilities, and its energy consumption... it would be enormous. With big infrastructure comes a high need for maintenance. That is costly and requires a lot of people just to keep it from breaking down. With a lot of people in one place, there are socioeconomic costs: production and transportation need to be built around such a center. And if you have a centralized system, you are prone to attack from adversaries. In short, I do not think we are even close to what the author is afraid of. We are just getting closer to understanding what would be needed to even start thinking about building AI - if it is possible at all.
That said, the article doesn't assume such a thing will happen soon, just that it may happen at some time in the future. That could be centuries away - I would still argue the end result is something to be concerned about.
Can you explain why you think that? Very often, mechanical efficiency outperforms biological. Humans have existed for thousands of years, neurons even longer. Computers and AI are relatively recent; we haven't really begun to explore the optimisation possibilities.
The amount of parallelism in the human brain is enormous. Not just each neuron, but each synapse has computational capacity. That means ~10^14 computational units or 100 trillion processing units -- on about 20 watts.
That doesn't even touch the bandwidth issues. Getting the sensory input in and out of the brain plus the bandwidth to get all of the processing signals between each neuron is at least another petabit per second. So, on bandwidth capacity alone we are 25+ years away (assuming the last 25 years of growth continues). And in humans that comes with 18 years of training at that massive bandwidth and computational power.
Also, we have no idea what a general intelligence algorithm looks like. We are just now getting multimodal LLMs.
From the computational/bandwidth perspective we are still 30 years from a computer being able to process the information a single human brain does, and even then while consuming 29+ megawatts of energy. If you had to feed a human 29 megawatts' worth of power, no business would be profitable. Humans wouldn't even survive.
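A back-of-envelope version of that comparison, with the rough figures from this comment plugged in (order-of-magnitude guesses, not measurements):

    # Illustrative only: the numbers are the rough estimates quoted above.
    brain_units   = 1e14   # ~synapse-level "processing units"
    brain_watts   = 20     # rough power draw of a human brain
    machine_watts = 29e6   # the 29+ MW figure above for comparable machine throughput

    print(brain_units / brain_watts)    # ~5e12 units per watt for the brain
    print(machine_watts / brain_watts)  # ~1.45e6: the machine needs about a million times more power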
Sorry, but the notion that we are close to AGI because we have good word predictors is fantasy. But, there will be some amazing natural language human-computer interface improvements over the next 10 years!
So you're questioning the above comment's argument based on a hand-wavy claim about completely speculative future possibilities?
As it stands, there's no disagreeing with the human brain's energy efficiency for all the computing it does, in so many ways that AI can't even begin to match. That's not even to speak of the whole unknown territory of whatever it is that gives us consciousness.
About the substance, I agree that there are fair grounds for concern, and it's not just about mathematics.
The best case scenario is rejection and prohibition of uses of AI that fundamentally threaten human autonomy. It is theoretically possible to do so, but since capital and power are pro-AI[^1], getting there requires a social revolution that upends the current world order. Even if one were to happen, the results wouldn't last for too long. Unless said revolution were so utterly radical that it would set us on a return trajectory to the Middle Ages (I have something of the sort published somewhere, check my profile!).
I'm an optimist when it comes to the enabling power of AI for a select few. But I'm a pessimist otherwise: if the richest nation on Earth can't educate its citizens, what hope is there that humans will be able to supervise and control AI for long? Given our current trajectory, if nothing changes, we are set for civilizational catastrophe.
[^1]: Replacing expensive human labor is the most powerful modern economic incentive I know of. Money wants, money gets.
And I envy such skill, because I like to think of myself as not entirely stupid, yet I would never be able to write or speak this way - I just don't have an aptitude for it.
So I don't see any reason to worry about the impact of AI. Unlike most fields with AI worries, mathematical research isn't even a significant employment area, and people with jobs doing it could almost certainly be doing something else for more money.
Given that kind of picture of reality, it is little wonder that AI seems like such a profound threat to so many people (putting aside for the moment the distinction between the aspirations of AI companies and the actual affordances it possesses). If being human is to be an economic instrument, then any AI that could eliminate the economic value of human beings is something akin to extinction. The god of economics has no further need of you. You may die now.
But this utilitarian view of the world reeks of nihilism. It is the world of total work, of work for work's sake. We never inquire about the ends that are the very reason for work in the first place. We never come to an understanding that economies exist for us, that we create them for mutual benefit. And we never seem to grasp that the economic part of human life is only part of human life, that it exists for the sake of those parts of life, the more important and most important parts of life, that are not a matter of economics. We have come to view life as meaningless, so we run into the embrace of the god of economics, losing ourselves in its endless churn, its immediate goals, truncating our minds so that we do not conceive of anything else, longing to escape the horror of the abyss that awaits us outside of its dreary confines...
The point of studying something in a theoretical capacity is to understand it, not to produce something of economic value. Each person must come into understanding from a state of not understanding. Homo economicus does not comprehend this. Homo economicus lives to eat and shit and cum and to accumulate things.
This framework is explicitly enforced by copyright law. Because a copyright monopoly is automatically granted to every content creator, every person is automatically expected to participate in the copyright system.
Copyright law hinges on incompatibility. The easier it is to make compatible work, the easier it is to make derivative work, which copyright defines as the ultimate evil.
Generative statistical models (what everyone is calling AI) are calling this bluff harder than ever. Derivative work is easier than any time in history.
So what do we do about it? It's pretty obvious from my perspective that the best move forward is to eliminate copyright for everyone. It seems instead that the most likely outcome is to eliminate copyright exclusively for the giant corporations that successfully launder their collaboration (derivative work) through large generative models.
But it did. Painting used to be a trade where you could sell your skills for more than purely aesthetic reasons, simply because there was no other way to document the world around you. It just isn't anymore, because of cameras. Professional oil portrait painter isn't a career in 2025.
Source? If anything I suspect there are more people making a living as painters now than at any point in history.
Is running an art/vocation comparable to photography and/or painting? We no longer have mailmen who run the length of the country afaik.
But running did heavily contribute to sedentary lifestyles in western countries, along with a bunch of other things.
> mathematical research isn't even a significant employment area
I agree, I think it will move from mathematicians "doing" math to managing computerised systems that do it instead. I'm sure we already have such systems.
I think far more important to humanity is improving mathematical literacy. From my perspective, math is made for mathematicians - it could be more accessible. As "pure" math matures, there is still plenty of opportunity in "applied" math (however you might define it).
The state space of mathematics is pretty different from chess, but I think ultimately mathematicians are just running something like A* on the space of propositions, with a custom heuristic that is learned by approximating the result of running A* with that heuristic, where the error is just the difference between the actual and predicted length of the proof.
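For what it's worth, a minimal sketch of that picture (every name here is made up for illustration; a real system would have formal terms as propositions and a trained model as the heuristic):

    import heapq
    from itertools import count

    def a_star_proof_search(start, is_goal, successors, heuristic):
        """Toy A* over a space of propositions.

        start      -- the starting proposition / proof state
        is_goal    -- predicate: is this the statement we wanted?
        successors -- proposition -> iterable of (next_proposition, step_cost)
        heuristic  -- learned estimate of the remaining proof length
        """
        tie = count()  # tie-breaker so the heap never compares propositions directly
        frontier = [(heuristic(start), next(tie), 0, start, [start])]
        seen = set()
        while frontier:
            _, _, g, prop, path = heapq.heappop(frontier)
            if is_goal(prop):
                return path  # the "proof": the chain of propositions reached
            if prop in seen:
                continue
            seen.add(prop)
            for nxt, cost in successors(prop):
                if nxt not in seen:
                    heapq.heappush(
                        frontier,
                        (g + cost + heuristic(nxt), next(tie), g + cost, nxt, path + [nxt]),
                    )
        return None  # search space exhausted without finding a proof

    # The learning loop described above would then be regression: nudge the heuristic
    # so its prediction matches the length of the proofs the search actually finds.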
This is a somewhat bleak picture of math. We also have the other phenomenon of increasing simplicity: both statements and proofs become more straightforward and simple once one has access to deeper mathematical constructions.
For example: Bezout's theorem would like to state that two curves of degree m and degree n intersect in m·n points. Except that two parallel lines intersect in 0 instead of 1·1 = 1 point, two disjoint circles intersect in 0 instead of 2·2 = 4 points, and a line tangent to a circle intersects it in 1 point instead of 1·2 = 2. These exceptions merge into a simple picture once one goes to projective space, complex numbers and schemes. Complex numbers lead to lots of other instances of simplicity.
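For reference, the clean projective statement being gestured at (quoted from memory, so treat it as a paraphrase): for plane curves C and D of degrees m and n over an algebraically closed field, with no common component,

    \[
      \sum_{p \,\in\, C \cap D} i_p(C, D) = m \cdot n ,
    \]

where i_p(C, D) is the intersection multiplicity at p. Once points at infinity, complex points and multiplicities are all counted, the exceptions above disappear.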
Similarly, proofs can become simple where before one had complicated ad-hoc reasoning.
Feynman once made the same point about the laws of physics. Someone figuring out the rules of chess by watching games first figures out the basic rules (how the pieces move) and then moves on to the complex exceptions (en passant, pawn promotion); what often happens in physics, in contrast, is that different sets of rules for apparently distinct phenomena become aspects of a unity (e.g. heat, light and sound were once seen as distinct things but are now all seen as movements of particles; the unification of electricity and magnetism).
Of course, this unification pursuit is never complete. Mathematics books/papers constantly seem to pull a rabbit out of a hat. This leads to 'motivation' questions for why such a construction/expression/definition was made. For a few of those questions, the answer only becomes clear after more research.
I think you need to be careful talking about "infinite" in the context of math. If the number of quantities, relationships, etc. is finite, so are all their combinations. Even things like the infinitude of available numbers might have fixed patterns that render their relevant properties effectively finite, and lead to further distinctions, e.g. finite vs countable, etc.
Personally, I feel like math has a bit of a legacy problem. It holds on to the conventions of an art that is very old, with very different initial assumptions at its conception, and this is now holding it back somehow. I lack the background to effectively demonstrate this other than "things I know/understand seem less intuitive in standard mathematical terms", e.g. generating functions and/or integrals feel easier to understand (to me) when you understand them to be software-like 'loops'.
In fact, the idea of "constructivist math" seems (again, to me) to beg for a more algorithmic/computational approach.
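To make the "software-like loops" remark concrete, a toy example of my own (not from the article): the generating function 1/(1 - x - x^2) is, operationally, just a loop producing its coefficients.

    def series_coefficients(n_terms):
        """Coefficients of 1/(1 - x - x^2): read the denominator as the
        recurrence a(n) = a(n-1) + a(n-2) and simply run it."""
        coeffs = []
        for n in range(n_terms):
            if n < 2:
                coeffs.append(1)
            else:
                coeffs.append(coeffs[-1] + coeffs[-2])
        return coeffs

    print(series_coefficients(8))  # [1, 1, 2, 3, 5, 8, 13, 21] -- the Fibonacci numbers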
Could you expand on this? I don't see maths as a language for quantities specifically (i.e. what does symmetry have to do with quantities).
> just too tedious (but not impossible) for a human being to work through the proof.
Arguably this already happened with the four colour theorem.
That’s easily proven to be true. “Two plus two equals four” is a theorem, so is “three plus three equals six”, etc.
Mathematics is just proof-driven development. To a spectator it might look like mathematics is about writing proofs, but that's no different from watching a software developer write a lot of tests. The proofs are the best tools against insidious logic bugs that the community of mathematicians has come up with over the last few hundred years. Mathematicians would welcome automating all the proofs, just like software engineers are happy for code assistants to take over the task of writing tests.
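In that spirit, a minimal sketch in Lean (Lean 4 syntax; a toy of my own, not something from the article) of "statement as spec, proof as passing test":

    -- The statement plays the role of a test's assertion;
    -- the machine-checked proof is the passing run.
    theorem two_plus_two : 2 + 2 = 4 := rfl

    -- A marginally less trivial spec, discharged by a lemma from the standard library.
    theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b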
Also:
> To expand: what if the practice of mathematics becomes completely determined by the diktats of a vast capitalist machinery of proprietary machine learning models churning out proof after proof, and theory after theory, conjured from the aether of all possible true statements?
I don't think that this is possible even in theory, as computational resources are limited and "the aether of all possible true statements" is incomprehensibly vast. (The number of true-seeming-yet-false statements exceeds the number of elementary particles in the visible universe by many orders of magnitude. More statements than particles.) You can't brute force it.
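A rough sense of the scale, with illustrative numbers of my own:

    # Illustrative only: count strings over a modest formal vocabulary.
    vocab = 100            # assumed symbol vocabulary for formal statements
    print(vocab ** 40)     # 10^80: 40-symbol strings already match the particle count
    print(vocab ** 200)    # 10^400: hopeless to enumerate, let alone check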
I agree, but... Spend time formalising a large part of existing mathematics and its proofs, train sufficiently powerful generative models on that together with cooperative problem-solving and proof strategies, give them access to proof assistants and adequate compute resources, and something interesting could happen.
I suspect the barrier is finding a business model that would pay for this. Turning mathematics into an industrial, extruded-on-demand product might work, but I don't know who (except maybe the NSA) would stump up the money.
Rejecting a proof would be more complicated: to confirm a proof you only need to check that the main statement in the formalisation matches that of the conjecture, but showing that a proof has been rejected requires knowledge of the proof itself (in general).
This could lead to the proof being rejected entirely, or fixed and strengthened.
Confirmation: if the AI understands it well enough that we're even considering asking it to confirm the proof, then you can do all kinds of things. You can ask it to simplify the entire proof to make it easier for humans to verify. You can ask it questions about parts of the proof you don't understand. You can ask it if there's any interesting corollaries or applications in other fields. Maybe you can even ask it to rewrite the whole thing in LEAN (although, like the author, I know nothing about LEAN and have no idea if this would be useful).
Such libraries would need documentation, or nobody would know when to use them, and then sharing is pointless.
If corporations build them, they would have to decide what to contribute to the commons and what to keep private. But that’s no different than any other language.
Also, I appreciate anonymity, but, to my point:
> I live by myself in a remote mountain cave beyond the ken of civilised persons, and can only be contacted during a full moon, using certain arcane rites that are too horrible to speak of.
Okay.
= I live in California, and the nearest Starbucks is more than 20 miles away.
>>can only be contacted during a full moon
= As a night person, I am awake when the streetlight outside my house turns on.
>>certain arcane rites that are too horrible to speak of
= In order to contact me, you must install Microsoft Teams.
Overall, it's not that bad, except for the MS Teams thing. ;)
https://en.wikipedia.org/wiki/Alexander_Grothendieck#Retirem...
"Local villagers helped sustain him with a more varied diet after he tried to live on a staple of dandelion soup." - like most people would.