Don't use Copilot, Gemini, Cursor, or any other code-assisting tool for the first several years of your study or career. You will write code more slowly than others, sure, but what you learn and what you build will be a hundred times more useful to you than just copying and pasting 'stuff' from an AI.
Invest in fundamentals, learn best practices, be curious.
If you want to learn: don't use these models to do the things for you. Do use these models to learn.
LLMs might not be the best teachers in the world, but they're available right there and then, whenever you have a question. Don't take their answers at face value; test them. Ask them to teach you the correct terms for things you don't know yet, so you can ask better questions.
There has never been a better time to learn. You don't have to learn the same way your predecessors did. You can learn faster and better, but you must be vigilant that you're not fooling yourself.
But at the same time, having a personalised Stack Overflow, minus the negative attitude, at your fingertips is super helpful, provided you also do the work of learning.
We are forgetting what "general purpose" means, both for learning and for real practical usage as a tool.
BTW, the best AI and computer science discussions are happening on Bluesky.
I remember the days of using books, having to follow the code bits in the book as I typed them. I don't remember diddly squat about it. Same for years of Stack Overflow: I'd just alt-tab 12 times, read the comments, then read another answer, assess the best answer. A massive waste of time.
Use all the technology you have at your hands I say. But be sure to understand what you auto-completed. If not, stop and learn.
But that's IMO exactly what your parent commenter says: use LLMs only after you actually have a clue what they are producing. So if you are a beginner, basically don't, because you won't have that understanding yet.
If I use Google Maps to find my way around, I'm faster by a lot. I also remember things, despite Google Maps doing the work for me.
Use code assistants as much as possible, but make sure to read and understand what you get, and try to write it yourself, even if you just retype it from a second window.
In this age and at this pace, the relevance of writing code will change in the next few years anyway.
I'd argue that coding assistants hinder a SWE's ability to build subject-matter expertise.
On the flip side, I've now found that getting AI to kick the tires on something I'm not well versed in helps me figure out how it works. But that's only because I understand how other things work.
If you're going to use AI in your learning, I think the best way you can do that is ask it to give you an example, or an incomplete implementation. Then you can learn in bits while still getting things done.
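To make that concrete, here's a hypothetical sketch of the pattern in Python. The assistant hands you a skeleton with the core logic left as a TODO, and you complete it yourself; the part you fill in is the part that teaches you something. (Binary search is just an illustrative choice, not from the thread.)

```python
# A skeleton an assistant might hand you, with the core step left as an exercise:
#
#   def binary_search(items, target):
#       """Return the index of target in sorted items, or -1 if absent."""
#       lo, hi = 0, len(items) - 1
#       while lo <= hi:
#           mid = (lo + hi) // 2
#           # TODO: compare items[mid] to target and narrow [lo, hi]
#       return -1

# The version you complete yourself:
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # found it
        elif items[mid] < target:
            lo = mid + 1        # target is in the upper half
        else:
            hi = mid - 1        # target is in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -1
```

You still get a working function at the end, but the comparison-and-narrowing step, the bit that's easy to get subtly wrong, came from you, not the model.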
Most of us do not remember the exact syntax for everything despite having coded in that language/framework for years.
What I've found after developing software for many decades and learning many languages is that the concepts and core logical thinking are what is most important in most cases.
Before the current AI boom I would still have had a problem doing some tasks in a vacuum. Not because I was incapable, but because I had so much other relevant information in my head that the minutiae of some tasks were irrelevant when I had immediate access to the needed information via auto-complete in an IDE and language documentation. I knew what I needed to look up because of all that other knowledge in my head, though. I knew things were possible. And in cases where I didn't _know_ something was possible, I had an inkling that it might be, because I could do it in another language or it was a logical extension of some other concept.
With the current rage for AI coding copilots, I personally feel many people are going down a path that degrades the corpus of general knowledge that drives the ability to solve problems quickly. Instead, they lean on the coding assistant to hold that knowledge and simply direct it to do tasks at a macro level. On the surface this may seem like a universal boon, but in reality they are giving up the intrinsic domain knowledge needed to understand what software is doing and to solve the problems that will crop up.
If those two paragraphs seem contradictory in some manner, I agree. You can argue that leaning on IDE syntax autocomplete and looking up documentation is not foundationally different from leaning on a coding assistant. I can only say that they don't _feel_ the same to me. Maybe what I mean is: if the assistant is writing code and you are directly using it, then you never gain knowledge. If you are looking things up in documentation or using auto-complete for function names or arguments, you are learning about the code and how to solve a problem. So maybe it's just: what abstraction level are we, as a profession, comfortable with?
To close out this oddly long comment, I personally use LLMs and other ML models frequently. I have found that they are excellent at helping me formulate my thoughts on a problem that needs to be solved and to surface information across a lot of sources into a coherent understanding of an issue. Sure, it's possible that it's wrong, but I just use it to help steer me towards the real information I need. If I ask for or it provides code, that's used as a reference implementation for the actual implementation I write. And my IDE auto-complete has gotten a boost as well. It's much better at understanding the context of what I'm writing and providing guesses as to what I'm about to type. It's quite good. Most of the time. But it's also wrong in very subtle ways that require careful reading to notice. And I'll sum this paragraph up with the fact that I'm turning to an LLM more and more as a first search before I hit a search engine (yet I hate Google's AI search results).
You put yourself at a significant disadvantage by not availing yourself of an infinitely patient, non-judgemental, and thoroughly well read tutor/coach.
You can get it to provide feedback on code quality and suggest refactors with its reasoning (the explanation rather than the full solution), basically treating it as an always-available study group.
There is probably room for a course or book on a methodology that lets students engage in this practice, or for models with prompts that forbid straight completion and instead provide help aimed at students.
However, the above advice for essays doesn't say to avoid textbooks or papers, just not to blindly copy them.
So perhaps you should use coding assistants, but always in a mode where you use them as a source to write from yourself, rather than cut-and-paste or direct editing.
But exercise pressure in courses will probably increase to recalibrate the difficulty level. LLMs let you finish assignments so much faster that I don't think you can afford not to use them.
However, if, like 99% of software developers in the workforce, your goal is to work on software until you earn enough that you no longer have to, then ignore this awful advice and focus on learning the tools that are becoming ubiquitous and mandatory in most roles.
Otherwise you are consigning yourself to irrelevance, equivalent to programmers refusing to use operating systems, compilers, runtimes, etc.
https://blog.google/technology/developers/gemini-code-assist...
This says nothing about Google's use of the data. Anybody have a better link?
Data is included in their training by default; you'll need to opt out.
(Zero analogies to drug dealers. Wink wink.)
So can this be used standalone, or must you use JetBrains / VSCode / Firebase / GitHub, with no other option? I'm not seeing any.
[0]: https://sachinjain.substack.com/p/ai-coding-assistant-gemini...
IMHO they should be inventing cars, planes and trains.
Why? Because they write code using tools made to accommodate people, and once you take people out of the loop, keeping those tools is pointless. It's especially evident when these AI tools import a bazillion libraries that exist only to save humans from reinventing solved problems and to provide comfort while coding.
The AI programming tools are not like humans; they shouldn't be using tools made for humans, and should instead solve tasks directly. E.g., if you are making a web UI, the AI tool doesn't have to import tons of libraries to center a div and make it pretty. It should be able to write the code that centers it directly, and humans probably shouldn't be looking at that code at all.
I find it works great if you prompt it one step at a time. It can still be iterative, but it lets you tighten up the code as you go.
Yes, you can still go yolo mode and get some interesting prototypes if you just need to show someone something fast, but if you know what you're doing it simply saves time.
I still feel more comfortable with the chat interface, where I talk to the LLM and have it generate code that I then put together in a dumb editor, because I'm still writing code for human analysis that happens to be interpreted by a machine. My claim is that if the code is actually written by a machine for consumption by a machine, then the human should be out of the code-creation loop completely, fully assuming the role of someone who demands results, knows when it's done right, and doesn't bother with the code itself.
I agree that it makes sense to use these currently, but IMHO the ultimate form of programming will be free of human-readable code; instead the AI will create a representation of the algorithm we need and execute around it.
Like having an idea: for example, if you need to program a robot vacuum cleaner, you should be able to describe how you want it to behave, and the AI will create an algorithm (an idea, like "turn when you bump into a wall, then try again") and constantly tweak and tend it. We wouldn't look directly at the code the AI wrote; instead we could test it and find edge cases a machine might not predict (e.g. the cat sits on the robot and blocks the sensors).
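As a toy sketch of that "turn when you bump into a wall, then try again" idea (the grid size, step count, and policy are all my own illustrative assumptions, not anything from the thread):

```python
# Naive bump-and-turn policy on a small empty grid: drive straight until a
# wall blocks the next cell, then rotate 90 degrees and try again.
WIDTH, HEIGHT = 5, 4
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # right, down, left, up

def run_vacuum(steps=50):
    x, y, d = 0, 0, 0          # start in a corner, facing right
    visited = {(x, y)}
    for _ in range(steps):
        dx, dy = DIRS[d]
        nx, ny = x + dx, y + dy
        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
            x, y = nx, ny      # move forward and "clean" the cell
            visited.add((x, y))
        else:
            d = (d + 1) % 4    # bumped a wall: turn, then try again
    return visited

print(len(run_vacuum()))  # 14: it only ever cleans the perimeter
```

Running it shows exactly why the tweak-and-tend loop matters: this first-draft policy covers just 14 of the 20 cells, circling the perimeter forever and never reaching the middle of the room, the kind of behavioural edge case you'd spot by testing rather than by reading the code.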
Particularly, "Data excluded from training by default" is not available in the free and first paid tier.
Google was obviously irked that Microsoft got all this juicy training data since everyone is on their walled git garden, and tried to come up with a way to also dip some fingers into said data.
You should see the reviews in the JetBrains plugin page: https://plugins.jetbrains.com/plugin/24198-gemini-code-assis...
People are all so "shut up and take my money", but the bloody thing won't even sign them in.
But it's still in beta, right? Perhaps it'll start working in a couple more months.