Depending on what needs to be done with the CSV files, it's quite possible to do it in 128MB of RAM. For example, if we need to read the rows, transform them a bit, and then write them to another file, we can read up to N rows, transform them, and write them out. That gives bounded memory consumption because at most N rows are held in memory at once. Similar strategies work when the rows feed an ETL job, a Web service call with the results of parsing the file, etc.
Editing a file gets trickier, though it's not impossible. Maybe a [piece table](https://en.wikipedia.org/wiki/Piece_table) plus some smart buffering could keep memory consumption below some constant, letting it work on large files, with the downside of lower performance once the file grows past that constant?
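To make the idea concrete, here is a minimal in-memory piece-table sketch (the class and its API are illustrative, not from any real editor). The key property is that edits never copy the original text; they only append to an add buffer and split descriptors, so memory grows with the number of edits rather than the file size. A real large-file version would keep the original buffer on disk and read it through that smart buffering layer:

```python
class PieceTable:
    """Toy piece table: the document is a list of (buffer, start, length)
    descriptors over two buffers, the read-only original and an
    append-only add buffer."""

    def __init__(self, original):
        self.original = original
        self.add = ""
        self.pieces = [("orig", 0, len(original))] if original else []

    def insert(self, pos, text):
        start = len(self.add)
        self.add += text  # new text only ever appends here
        new_piece = ("add", start, len(text))
        offset = 0
        for i, (buf, s, ln) in enumerate(self.pieces):
            if offset + ln >= pos:
                # Split the covering piece and splice the new one in.
                split = pos - offset
                left = (buf, s, split)
                right = (buf, s + split, ln - split)
                self.pieces[i:i + 1] = [
                    p for p in (left, new_piece, right) if p[2] > 0
                ]
                return
            offset += ln
        self.pieces.append(new_piece)  # pos past the end: append

    def text(self):
        bufs = {"orig": self.original, "add": self.add}
        return "".join(bufs[b][s:s + ln] for b, s, ln in self.pieces)
```

Usage: `PieceTable("hello world")` followed by `insert(5, ",")` yields `"hello, world"` without ever rewriting the original string, which is what makes the structure attractive when "the original" is a multi-gigabyte file you'd rather not load.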