To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.
> "The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true."
Incidentally, I've also had a similar problem when reviewing HCI and computer systems papers. OK sure, this deep learning neural net worked better, but what did we as a community actually learn that others can build on?
We "know" it's true, but only because a machine ground mechanically through lots of tedious cases. I'm sure most mathematicians would appreciate a simpler and more elegant proof.
However, by the end of that period, it had transitioned to a situation where the most important skill in achieving a good score was manipulating statistical machine learning tools (Random Forests was a popular one, I recall) rather than gaining a deep understanding of the physics or sociology of the problem. I started doing worse, and I lost interest in Kaggle.
So be it. If you want to win, you use the best tools. But the part that brought joy to me was not fighting for the opportunity to win a few hundred bucks (which I never did), but for the intellectual pleasure and excitement of learning about an interesting problem in a new field that was amenable to mathematical analysis.
"Mathematics advances by solving problems using new techniques because those techniques open up new areas of mathematics."
For actual research mathematics, there is no reason why an AI (maybe not the current, purely statistical models) shouldn't be able to guide you through it exactly the way you prefer. Then it's just a matter of being honest with yourself about your own desires.
But it'll also vastly blow up the field of recreational mathematics. Have the AI toss a problem your way that you can solve in about a month. A problem involving some recent discoveries. A problem Franklin could have come up with. During a brothel visit. If he was on LSD.
Like I said, I don't have any idea what's going to happen. The thing that makes me sad about these conversations is that the people I talk to sometimes don't seem to have any appreciation for the thing they say they want to dismantle. It might even be better for humanity on the whole to arrive in this future; I'm not arguing that one way or the other! Just that I think there's a chance it would involve losing something I really love, and that makes me sad.
In mathematics it is just as important (if not more so) to be able to apply the techniques used in a novel proof as it is to know that the theorem itself is true. Not only might those techniques solve similar problems that the theorem alone cannot, but they might even uncover wholly new mathematical concepts, leading you to mathematics that you previously could not even conceive of.
Machine proofs in their current form are basically huge searches/brute forces from some initial statements to the theorem being proved, by way of logical inference. Mathematics is in some ways the opposite of this: it's about understanding why something is true, not solely whether it is true. Machine proofs give you a path from A to B, but that path could be understandable-but-not-generalizable (finding some simple application of existing theorems to get the result that mathematicians simply missed), generalizable-but-not-understandable (a brute force), or neither understandable nor generalizable (imagine gigabytes of pure propositional logic on variables with names like n098fne09 and awbnkdujai).
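To make that last category concrete, here is a minimal Lean sketch (a toy illustration, not output from any actual prover): the `decide` tactic certifies an arithmetic fact mechanically, so the kernel accepts the proof, but the proof term carries no insight into why the fact holds.

    -- Machine-checked but explanation-free: `decide` grinds through the
    -- computation and the kernel certifies the result. The proof gives a
    -- path from A to B with zero insight into *why* it is true.
    example : 1024 % 7 = 2 := by decide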
Interestingly, some mathematicians like Terry Tao are starting to experiment with combining LLMs with automated theorem proving, because it might help in both guiding the theorem-prover and explaining its results. I find that philosophically fascinating because LLMs rely on some practices which are not fully understood, hence the article, and may validate combining formal logic with informal intuition as a way of understanding the world (both in mathematics, and generally the way our own minds combine logical reasoning with imprecise language and feelings).
This algorithm will happily predict whatever it was fed. Just ask ChatGPT to write a review of a camera, car, or washing machine that doesn't exist: you will receive a nicely written list of the item's advantages, never mind that it does not exist.
I really wish that had been my experience taking undergrad math courses.
Instead, I remember linear algebra, where the professor would prove a result by introducing an equation pulled out of thin air, plugging it in, showing that the result was true, and that was that. OK sure, the symbol manipulation proved it was true, but it gave zero understanding of why. And when I'd ask professors about the why, I'd encounter outright hostility -- all that mattered was whether it was proven, and asking "why" was positively amateurish and unserious. It was irrelevant to the truth of a result. The same attitude prevailed when we got to quantum mechanics -- "shut up and calculate".
I know there are mathematicians who care deeply about the why, and I have to assume it's what motivates many of them. But my actual experience studying math was the polar opposite. And so I find it very surprising to hear math described as more interested in the why than the what. The way I was taught didn't just not care about the why; it seemed actively contemptuous of it.
I majored in math at MIT, and even at the undergraduate level it was more like what OP is describing and less like what you're saying. I actually took linear algebra twice, since my first major was Economics before I decided to add on a math major, and the version of linear algebra for your average engineer or economist (i.e., a bunch of plug-and-chug matrix stuff), which is what I assume you're referring to, was very different. Linear algebra for mathematicians was all about vector spaces and bases and such, and was very interesting and full of proofs. I don't think concretely multiplying matrices was even a topic!
So I guess linear algebra is one of those topics where the math side is interesting and very much what all the mathematicians here are describing, but which turned out to be so useful for everything that there's a non-mathematician version of it, which sounds more like what you experienced.
Maybe because CS is more engineering than science (at least as far as what drives the sociology), a lot of people approach AI from the same industrial perspective -- be it applications to math, science, art, coding, and whatever else. Ideas like _the bitter lesson_ only reinforce the zeitgeist.
Which is to say, if you only concern yourself with theorems which have short, understandable proofs, aren't you cutting yourself off from vast swathes of math space?
If you're talking about questions that are well-motivated but whose answers are ugly and incomprehensible, then a milder version of this actually happens fairly often --- some major conjecture gets solved by a proof that everyone agrees is right but which also doesn't shed much light on why the thing is true. In this situation, I think it's fair to describe the usual reaction as, like, I'm definitely happy to have the confirmation that the thing is true, but I would much rather have a nicer argument. Whoever proved the thing in the ugly way definitely earns themselves lots of math points, but if someone else comes along later and proves it in a clearer way then they've done something worth celebrating too.
Does that answer your question?
Care or not, what are they supposed to do with it?
Sure, they can now assume the theorem to be true, but nothing stopped them from doing that before.
> the primary aim isn't really to find out whether a result is true but why it's true.
I'm honestly surprised that there are mathematicians who think differently (my background[0]). There are so many famous mathematicians stating this through the years, some more subtly, like Poincaré saying that math is not the study of numbers but of the relationships between them, while others are far more explicit. This sounds more like what I hear from the general public, who think mathematics is discovered and not invented (how does anyone think anything different after taking Abstract Algebra?).

But being over in the AI/ML world now, this is my NUMBER ONE gripe. Very few are trying to understand why things are working. I'd argue that the biggest reason these machines are black boxes is that no one is bothering to look inside them. You can't solve things like hallucinations and errors without understanding these machines (and there's a lot we already do understand). There's a strong pushback against mathematics and I really don't understand why. It has so many tools that can help us move forward, but yes, it takes a lot of work. It's bad enough that I know people who have gotten PhDs from top CS schools (top 3!) and don't understand things like probability distributions.
Unfortunately doing great things takes great work and great effort. I really do want to see the birth of AI, I wouldn't be doing this if I didn't, but I think it'd be naive to believe that this grand challenge can entirely be solved by one field and something so simple as throwing more compute (data, hardware, parameters, or however you want to reframe the Bitter Lesson this year).
Maybe I'm biased because I come from physics where we only care about causal relationships. The "_why_" is the damn Chimichanga. And I should mention, we're very comfortable in physics working with non-deterministic systems and that doesn't mean you can't form causal relationships. That's what the last hundred and some odd years have been all about.[1]
[0] Undergrad in physics, moved to work as an engineer, then went to grad school to do CS because I was interested in AI and specifically in the mathematics of it. Boy, did I become disappointed years later...
[1] I think there is a bias in CS. I notice there is a lot of test-driven development, despite that being well known to be full of pitfalls. You unfortunately can't test your way into a proof, as any mathematician or physicist can tell you. Just because your thing does well on some tests doesn't mean there is proof of anything. Evidence, yes, but that's far from proof. Don't make the mistake Dyson did: https://www.youtube.com/watch?v=hV41QEKiMlM
People do look, but it's extremely hard. Take a look at how hard the mechanistic interpretability people have to work for even small insights. Neel Nanda[1] has some very nice writeups if you haven't already seen them.
But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out that its then just implicitly known by these types. The casual disregard for their fellow humans. The lack of true care for anything and anyone they touch.
Move fast and break things!! Even when it's the society you live in.
That arrogance and/or hubris is just another type of stupidity.
It's not just that comments that vent denunciatory feelings are lower-quality themselves, though usually they are. It's that they exert a degrading influence on the rest of the thread, for a couple reasons: (1) people tend to respond in kind, and (2) these comments always veer towards the generic (e.g. "lack of true care for anything and anyone", "just another type of stupidity"), which is bad for curious conversation. Generic stuff is repetitive, and indignant-generic stuff doubly so.
By the time we get further downthread, the original topic is completely gone and we're into "glorification of management over ICs" (https://news.ycombinator.com/item?id=43346257). Veering offtopic can be ok when the tangent is even more interesting (or whimsical) than the starting point, but most tangents aren't like that—mostly what they do is replace a more-interesting-and-in-the-key-of-curiosity thing with a more-repetitive-and-in-the-key-of-indignation thing, which is a losing trade for HN.
This is the part I don't get honestly
Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
Do they think the upheaval is simply going to be worth it?
Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Do they just never think about consequences at all?
I am trying not to be extremely negative about all of this, but the speed at which things are moving makes me think we'll hit the cliff before we even realize it's in front of us
That's the part I find unnerving
I worked in an organization afflicted by this and still have friends there. In the case of that organization, it was caused by an exaggerated glorification of management over ICs. Managers truly did act according to the belief, and show every evidence of sincerely believing in it, that their understanding of every problem was superior to the sum of the knowledge and intelligence of every engineer under them in the org chart, not because they respected their engineers and worked to collect and understand information from them, but because managers are a higher form of humanity than ICs, and org chart hierarchy reflects natural superiority. Every conversation had to be couched in terms that didn't contradict those assumptions, so the culture had an extremely high tolerance for hand-waving and BS. Naturally this created cover for all kinds of selfish decisions based on politics, bonuses, and vendor perks. I'm very glad I got out of there.
I wouldn't paint all of tech with the same brush, though. There are many companies that are better, much better. Not because they serve higher ideals, but because they can't afford to get so detached from reality, because they'd fail if they didn't respect technical considerations and respect their ICs.
Why is the AI field so secretive? Because it's all trade secrets, and maybe soon to become patents. You don't give away precisely how semiconductor fabs work, only base-level research of the form "this direction is promising".
Why is everyone pushed to add AI in? Because that's where the money is; that's where the product is.
Why does AI need results fast? Because it's a production line, and you create and design stuff.
Even the core distinction mentioned -- that AI is about "speculation and possibility" -- is all about tool experimenting and prototyping. It's all about building and constructing. Aka the Engineering/Technology letters of STEM.
I guess the next step is to ask "what to do next?". IMO, the math and AI fields should recognize the divide and slowly diverge, leaving each other alone at arm's length, just as engineers and programmers (not computer scientists) already do.
Long story short, current AI is doing cargo-cult math -- i.e., going through the motions with mimicry. Experts can see through it, but excited AI hypesters are blind and lap it up. Even AlphaGeometry (with a built-in theorem prover) is largely doing brute-force search over a limited axiomatized domain. This is not to say AI is not useful, just that the hype exceeds the actuality.
I can see a day might come when we (research mathematicians, math professors, etc) might not exist as a profession anymore, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.
[0] I've seen this attributed to The Character of Physical Law but haven't confirmed it
I'd include writing, art-, and music-making in that category.
This is well studied and not unique to AI, the USA, English, or even Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers lays out a history of technology introduction... if the results are tallied in population, money, or other prosperity, the civilizations and language groups that have systematic ways to explore and apply new technology are the "winners" in the global context.
AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance, and chairs on the deck of the Titanic... but this is getting off-topic for maths.
Engineering has always involved large amounts of both math and secrecy, what's different now?
(But the engineers want the benefits of academic research -- going to conferences to give talks, credibility, intellectual prestige -- without paying the costs, e.g. actually sharing new knowledge and information.)
Not exactly AI by today's standards, but a lot of the math that they need has been rolled into their software tools. And Excel is quite powerful.
I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who cared solely about solving problems via "algorithms". The time period was just before modern math emerged, roughly the late 1800s, when the algorithmists, intuitionists, and logic-oriented mathematicians coalesced into a combination that values intuition, algorithms, and logic alike, leading to the modern way we do proofs and our focus on proofs.
These algorithmists didn't care about the so-called "meaningless" operations that got an answer; they just cared that they got useful results.
I think the article downplays this side of math, and it is the side at which AI will be best, or most useful. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.
This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.
But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?
Or more generally, how do you verify a proof that you don't understand?
Just so this isn't misunderstood: not that much cutting-edge math can presently be coded in Lean. The famous exceptions (such as the results by Clausen-Scholze and Gowers-Green-Manners-Tao) have special characteristics which make them much more ground-level and easier to code in Lean.
What's true is that it's very easy to check whether a Lean-coded proof is correct. But it's hard and time-consuming to formulate most math as Lean code. It's something many AI research groups are working on.
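As a toy illustration of that asymmetry (assuming Lean 4; `Nat.add_comm` is a stock library lemma): checking the finished proof is instant and mechanical, while the expensive, human part was formalizing the definitions and library that make the statement expressible at all.

    -- Checking is mechanical: the kernel validates this instantly.
    -- The hard part happened earlier, when someone formalized `Nat`,
    -- `+`, and the supporting lemmas that make this statement possible.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b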
I thought the rhetoric sounded somewhat like the AGI/accelerationist folks who postulate some sort of eventual "godlike" AI whose thought processes are somehow fundamentally inaccessible to humans. So if you had a proof that was only understandable to that sort of AI, then mathematics as a discipline of understanding would be over for good.
But this sounds like it would at least theoretically let you tackle the proof? Like, it's imaginable that some AI generates a proof that is several TB (or EB) in size but still validates - which would of course be impossible to understand for human readers in the way you can understand a paper. But then "understanding" that proof would probably become a field of research of its own, sort of like the "BERTology" papers that try to understand the semantics of specific hidden states in BERT (or similar approaches for GPTs).
So I'd see an incomprehensible AI-generated proof not as the end of research in some conjecture, but more as a sort of guidance: unlike before, you now know that the treasure chest exists and you even have its coordinates; you just don't have the route to that location. The task then becomes figuring out that route.
This is the big question! Computer-aided proof has been around forever. AI seems like just another tool from that box. Albeit one that has the potential to provide 'human-friendly' answers, rather than just a bunch of symbolic manipulation that must be interpreted.
In this hypothetical Riemann Hypothesis example, the only thing the human would have to check is that (a) the proof-verification software works correctly, and that (b) the statement of the Riemann Hypothesis at the very beginning is indeed a statement of the Riemann Hypothesis. This is orders of magnitude easier than proving the Riemann Hypothesis, or even than following someone else's proof!
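A sketch of what that audit could look like in Lean, with heavy caveats: the statement below is one informal rendering of the Riemann Hypothesis, not necessarily how Mathlib formalizes it, and `sorry` stands in for the hypothetical machine-generated proof.

    import Mathlib

    -- Hypothetical workflow: an AI hands us a giant proof term; the human
    -- auditors only need (a) to trust the Lean kernel and (b) to read this
    -- statement and agree it says "nontrivial zeros of zeta lie on the
    -- critical line". The second disjunct covers the trivial zeros, and
    -- `s ≠ 1` guards against the function's junk value at the pole.
    theorem rh_rendering :
        ∀ s : ℂ, riemannZeta s = 0 → s ≠ 1 →
          s.re = 1 / 2 ∨ ∃ n : ℕ, s = -2 * ((n : ℂ) + 1) := by
      sorry -- stands in for the machine proof no human can follow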
This really isn't about mathematics or AI, this is about the gap between academia and business. The academic wants to pursue knowledge for the sake of knowledge, while a business wants to make money.
Compare to computer science or engineering, where business has near completely pervaded the fields. I've never heard anybody lamenting their inability to "pursue understanding for its own sake" and when someone does advance the theory, there's also a conversation about how to make it profitable. The academic aspect isn't gone, but it's found a way to coexist with the business aspect, for better or worse.
Honestly it sounds like mathematicians have had things pretty good if this is one of their biggest complaints.
An issue in these discussions is that mathematics is at once an art, a sport, and a science. And the development of AI that can build 'useful' libraries of proven theorems means different things for each. The sport of mathematics will be basically over. The art of mathematics will thrive as it becomes easier to explore the mathematical world. For the science of mathematics, it's hard to say; it's been kind of shaky for ~50 years anyway, but it can only help.
Modern AI is about "well, it looks like it works, so we're golden".
What I think mathematicians should remind themselves is that a lot of prestigious mathematicians, the likes of Cantor or Erdős, often employed only a handful of "tricks"/heuristics in their proofs over their careers. They repeatedly and successfully applied these strategies to unsolved problems.
I'd argue it would not take a tremendous jump in performance for an AI to begin its own journey, similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
AI need not be Euler to be an important tool and figure within mathematics.
I know this claim is often made, but it seems obvious that in this discussion, "trick" means something far wider and more subtle than any set computer program. In a lot of ways, "he just uses a few tricks" is akin to the way a mathematician will say "and the rest of the proof is elementary" (when it's still quite long and hard for anyone not versed in the given specialty). I mean, before category theory was formalized, the proofs that are now possible with it might have been classified as "all done with this trick", but grasping said trick was far from an elementary matter.
> I'd argue it would not take a tremendous jump in performance for an AI to begin its own journey, similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
Not that LLMs can't do some impressive things but your narrative seems to anthropomorphize them in a less than useful way.
I fear AI is just going to lower our general epistemic standards as a society, and we'll forget essential truth-verifying techniques in the technical (and other) realms altogether. Needless to say, this will have an impact on our society's ethical and, effectively, legal foundations, because ultimately, without clarity on the hows and whys, it will be nearly impossible to justly assign damages.
Mathematicians who practice constructive math and view existence proofs as mere intellectual artifacts tend to embrace AI, physics, engineering and even automated provers as worthy subjects.
In math, there's an urban legend that the first Greek who proved sqrt(2) is irrational (sometimes credited to Hippasus of Metapontum) was thrown overboard to drown at sea for his discovery. This is almost certainly false, but it does capture the spirit of a mission in pure math. The unspoken dream is this:
~ "Every beautiful question will one day have a beautiful answer."
At the same time, ever since the pure and abstract nature of Euclid's Elements, mathematics has gradually become a more diverse culture. We've accepted more and more kinds of "numbers:" negative, irrational, transcendental, complex, surreal, hyperreal, and beyond those into group theory and category theory. Math was once focused on measurement of shapes or distances, and went beyond that into things like graph theory and probabilities and algorithms.
In each of these evolutions, people are implicitly asking the question:
"What is math?"
Imagine the work of introducing the sqrt() symbol into ancient mathematics. It's strange because you're defining a symbol as answering a previously hard question (what x has x^2=something?). The same might be said of integration as the opposite of a derivative, or of sine defined in terms of geometric questions. Over and over again, new methods become part of the canon by proving to be both useful, and in having properties beyond their definition.
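Spelled out as a sketch, the pattern is: define the symbol as the answer to a question, then let it earn its place through properties beyond that definition:

    % Defined as the answer to a previously hard question...
    \[ \sqrt{a} := \text{the unique } x \ge 0 \text{ with } x^2 = a \qquad (a \ge 0) \]
    % ...and kept in the canon because it has properties beyond its definition:
    \[ \sqrt{ab} = \sqrt{a}\,\sqrt{b}, \qquad \sqrt{a^2} = \lvert a \rvert \]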
AI may one day fall into this broader scope of math (or may already be there, depending on your view). If an LLM can give you a verified but unreadable proof of a conjecture, it's still true. If it can give you a crazy counterexample, it's still false. I'm not saying math should change, but that there's already a nature of change and diversity within what math is, and that AI seems likely to feel like a branch of this in the future; or a close cousin the way computer science already is.
* AI could get better at thinking intuitively about math concepts.
* AI could get better at looking for solutions people can understand.
* AI could get better at teaching people about ideas that at first seem abstruse.
* AI could get better at understanding its own thought, so that progress is not only a result, but also a method for future progress.
Henri Cartan of the Bourbaki group had not only a more comprehensive view, but a greater sense of the scope and potential of mathematical modeling and description.
lol, took me a second to get the plausible reason for that
I don't think this is (generally) true? Speaking as a math postdoc right now, at least in my field of computational mathematics there's definitely a notion of first author. Though, a note of individual contributions at the bottom of the paper is becoming more common.
This seems like a caricature; one thing I've often heard in the AI community is that it'd be interesting to train models with an old data cutoff date (say, 1900) and see whether the model is able to reinvent modern science.
There is a major caveat here. Most 'serious math' in AI papers is wrong and/or irrelevant!
It's even the case for famous papers. Each lemma in Kingma and Ba's ADAM optimization paper is wrong, the geometry in McInnes and Healy's UMAP paper is mostly gibberish, etc...
I think it's pretty clear that AI researchers (albeit surely with some exceptions) just don't know how to construct or evaluate a mathematical argument. Moreover the AI community (at large, again surely with individual exceptions) seems to just have pretty much no interest in promoting high intellectual standards.
Wrong in the strict formal sense or do you mean even wrong in “spirit”?
Physicists are well-known for using “physicist math” that isn’t formally correct but can easily be made as such in a rigorous sense with the help of a mathematician. Are you saying the papers of the AI community aren’t even correct “in spirit”?
This is not the point, but the saying "there is no royal road to geometry" is far older than Gauss! It goes back at least to Proclus, who attributes it to Euclid.
The story goes that the (royal) pharaoh of Egypt wanted to learn geometry, but didn't want to have to read Euclid. He wanted a faster route. But, "there is no royal road to geometry."
Unless the royal pharaoh of Egypt refers to Ptolemy I Soter, the Macedonian general who was the first Ptolemaic Kingdom ruler of Egypt after Alexander's death.
In fact, the modern practice (the concept predates the practice of course, but was more of an opinion than a ritual) of mathematics as this ultimate understandable system of truth and elegance seemingly began in Ancient Greece with their practice of proofs and early development of mathematical "frameworks". It didn't reach its current level of rigor and sophistication until 100-150 years ago when Formalism became the dominant school of thought (https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathe...), spearheaded by a group of mathematicians who held even deeper beliefs that are often referred to as Mathematical Platonism (https://en.wikipedia.org/wiki/Mathematical_Platonism). (Note that these wikipedia articles are not amazing explanations of the concepts, how they relate to realism, or developed historically but they are adequate primers)
Of course, Gödel proved that truths exist outside of these formal systems (only a couple of decades after mathematicians had started building a secret religion around worshipping Logos; these beliefs were pervasive, see e.g. Einstein's concept of God as a clockmaker or Erdős's references to "The Book"), which leaves us almost back where we started, where we might need to accept that there are some empirical results and patterns which "work" but that we do not fully understand and may never understand. Personally, I think this philosophically justifies not subjecting oneself to the burden of spending excess time understanding or proving things that have never been understood before; they may elude elegance (like the 4-color proof) or even knowability.
We can always look backwards and explain things later, and of course, it's a false dichotomy that some theorems or results must be fully understood and proven (or proven elegantly) before they can be considered true and used as a basis for further results. Perhaps it is unsatisfying to those who wish to truly understand the universe in terms of mathematical elegance, but that asshole used mathematical elegance to disprove mathematical elegance as a perfect tool for understanding the universe already, so take it up with him.
Personally, as someone who at one time seriously considered pursuing a life in mathematics, in part because of its ability to answer deep truths, I think Gödel set us free: to understand or know things, we cannot rely solely on mathematics. Formal mathematics itself tells us that there are things we can only understand by discovering them, building them, or experimenting with them. There are truths that CUDA Cowboys can uncover that LaTeX Liturgy cannot.
With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens it from months to days.
I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math, because I did not even know that part of math existed. Watching how DeepSeek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.
We clearly will soon have the technology for that... but it requires a rich, opinionated benefactor or an inspired government agency to fund the development... or perhaps it can be done as an open model variant through crowdsourcing.
An LLM personal assistant that detects my preferences, echoes my biases, massages my ego, and avoids challenging me with facts and new ideas, whose goal is to maximize screentime and credits for shareholder value... that seems to be where things are heading.
I guess this is an argument for having open models.
I thought I understood calculus until I realised I didn't. And that took a bit of a thwack in the face, really. I could use it, but I didn't understand it.
My point is that a human advisor does not have enough time to answer questions and correctly explain the subject. I may get like 4 hours a week, if lucky. Books are just a cheap substitute for real dialog and reasoning with a teacher.
Most ancient philosophy texts were in the form of a dialog. It is a much faster way to explain things.
AI is a game changer. It shortens the feedback loop from a week to an hour! It makes mistakes (as humans do), but it is faster to find them, and finding them also develops cognitive skills.
It is like programming in low-level C in Notepad 40 years ago, versus a high-level language with an IDE, VCS, unit tests...
Or like farming resources in Rust. Boring, repetitive grind...
And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even 'talking things through' (much like mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).
Similar stuff holds about art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor. One person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and this can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.