As a thought experiment, I wonder what happens when the FISA court orders Riot to install a modified version on a suspected terrorist's computer. No need for privilege escalation when you can just ask the user to install it at ring-0.
That's the approach I've been taking for a long time now.
If you don't, you will always either a) have your fun ruined by trying to be security-conscious, or b) most likely give in in the end and allow things you really shouldn't allow on a trusted machine, because otherwise you can't achieve your task (getting a game to run).
So I have a game box, try to make sure that nothing important ever touches it (which is a huge PITA when game clients insist on forcing email-based 2FA on you), but in exchange I don't worry too much about its security.
That also fits nicely with games requiring Windows 10 and Windows 10 being so outright privacy- and user-hostile that I can't imagine running it on my primary machine.
They are tiptoeing quite carefully there.
I wonder whether a lot of that "collect or process" could normally be blocked by users, but the kernel module actually defeats opt-out attempts and identifies everyone anyway.
And yes, obviously you need to have a dedicated gaming PC and certainly not install any games or any software that isn't strictly necessary on the systems/VMs with important data.
I've also been installing more and more software into ~/bin rather than the more traditional /opt and /usr/local/bin. I think that the trend towards usermode software will take over in the next five years.
What are the realistic security issues with ring-0 access on a personal computer? I bet most interesting stuff on personal computers is easily accessible with the normal user privileges that every game client already has.
Which is why the current tendency is towards more sandboxing, not less; things like flatpak on Linux, the app stores on Windows and Mac, the heavy sandboxing on phones, and so on. Running an in-kernel component for an application goes against that.
FISA? Try the CCP.
Bold message from a Chinese company. People freak out about Huawei, but Tencent is 1000% worse. And here they are installing a kernel driver on your PC.
Do you really think that after 100M people install this kernel driver that the Chinese government won't lean on Tencent to gain access, or use it beyond its original purpose?
Do you feel the same way about Microsoft and Apple, and every other company that provides a hardware driver for a modern computer, and whether state governments (USA included) put pressure on them to let them advance their agenda by using back doors in their drivers or software?
Why is Riot special in all this? What, in your view, makes them more likely to be so secretly and so deeply corrupted in the manner you suggest?
Note I'm not asking you if you run MacOS or Windows.
That in itself tells me enough about the efficacy of the system. Security through obscurity is only a hand-wave at security. Trading away all the security architecture put in place over the past decades for something that needs to be hidden to remain secure is a really poor value proposition.
I understand why they want this in place; it does raise the effort required to cheat, but there are other ways this can be accomplished without compromising a user's security.
A user who installs an antivirus program wants that program to do its job and find bad actors. The virus, on the other hand, is completely unwanted by both the user and the software: its existence is threatened on all fronts.
An anti-cheat, however, lives in an extremely adversarial environment. The cheater wants the cheat on their computer, and will willingly take extra steps to assist it. This makes the anti-cheat software the "unwanted virus" in this case: it has to exist in the most hostile of environments and somehow detect programs that have higher privileges than itself.
That said, cheating is something that will never go away. Years and years ago, a friend and I developed a completely undetectable cheat for all games on the HL2 platform. It involved a second computer, which man-in-the-middled all network data to the client computer. This second computer would then display a "radar" of where enemies were. Since the anti-cheat had no possible way of knowing this second computer existed, there was not much it could do.
If you wanted to get more aggressive with the setup above, you could have that second computer modify outbound requests as well. So if you shoot your gun and it would have hit the ground, it will now instead hit an enemy in the head; even something like an aimbot is entirely possible with this setup.
However, there is indeed an anti-cheat that can catch all blatant cheats, and it's basically what Valve did/does for CS:GO: allow users to report suspected cheaters and then have the community analyze the reports. Unfortunately, it will never get rid of radar/ESP cheaters, only aimbots and the like.
Honestly, it sounds to me like there is a business model in the above. Years ago we had companies like Even Balance/PunkBuster, Easy Anti-Cheat, etc., which provided software-based anti-cheat systems. As you would expect, most would be bypassed and a daily cat-and-mouse game would ensue. The solution, imo, is to create a SaaS where you essentially provide a reporting + monitoring tool. Users of your game can report suspected cheaters (with the demo file / VOD / replay / whatever attached) and your trained wet-ware staff would review all reports and take action where necessary. No invasive software necessary. Actually, no software would be needed on the end user's computer at all: it is all done on another user's PC.
In fact, if someone is interested in doing the above, hit me up. Sounds like an easy win.
> It involved a second computer, which man-in-the-middled all network data to the client computer.
Out of interest, was there no transport level encryption to deal with here? Or did you need to do something special to capture keys on the client?
Before CSGO moved to Steam Networking, the game itself encrypted the packets. I can't remember exactly when this was introduced, but it's still in place - see https://github.com/alliedmodders/hl2sdk/blob/acf932ae06b64b7...
[0] https://partner.steamgames.com/doc/features/multiplayer/netw...
As an example, for CS:GO in the past, the server always sent all player positions from anywhere, so it was possible to create cheats that drew players anywhere on the map. They changed the way it's done: coordinates are only sent when other players are nearly visible (even if distant) or close by. This limited the way wallhacks work; it's no longer possible to see where players are from far away :)
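The server-side idea described above can be sketched very roughly like this; the radius, coordinate tuples, and function name are all invented for illustration, not taken from any actual game server:

```python
import math

# Hedged sketch of server-side position culling: only include another
# player's position in an update if they are close enough to plausibly
# matter to the receiving client. Threshold and units are made up.
VISIBILITY_RADIUS = 1500.0

def visible_positions(me, others):
    """Return only those positions the server should reveal to `me`."""
    return [p for p in others if math.dist(me, p) <= VISIBILITY_RADIUS]

# A nearby player is sent; a far-away one is withheld entirely.
print(visible_positions((0, 0), [(100, 100), (5000, 5000)]))  # -> [(100, 100)]
```

Information the server never sends is information no client-side cheat can draw, no matter what privileges it has.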
What needs to be done is to reverse-engineer the communication protocol. If there is encryption, some kind of decryption key has to be somewhere in your game client. Then you can convert 3D coordinates to 2D and even draw a radar on your smartphone if you make an app.
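The 3D-to-2D step is just a projection of world coordinates onto a top-down minimap. A minimal sketch, with the map bounds, radar size, and function name all invented for illustration:

```python
# Hypothetical sketch: map decoded world-space (x, y) positions onto a
# square 2D radar. Real games would need per-map bounds and orientation.
def world_to_radar(x, y, map_min, map_max, radar_size):
    """Project a world coordinate into radar pixel space, clamped to bounds."""
    span = map_max - map_min
    px = int((x - map_min) / span * radar_size)
    py = int((y - map_min) / span * radar_size)
    clamp = lambda v: max(0, min(radar_size - 1, v))
    return clamp(px), clamp(py)

# Example: a map spanning -4000..4000 units on a 256-pixel radar.
print(world_to_radar(0.0, 0.0, -4000.0, 4000.0, 256))  # -> (128, 128)
```

Since this runs entirely outside the game machine, no client-side anti-cheat, kernel-level or otherwise, can observe it.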
I don't think it's a viable model because players are willing to do it for free, as CS:GO's Overwatch shows.
'Not invented here' is a blessing and a curse.
https://www.youtube.com/watch?v=ATkpqYmWt8k&feature=youtu.be
If Valve can mitigate cheating in CS:GO without such an intrusive service, I am sure Riot can. I myself did a very, very, very poor job with an autoencoder to detect anomalous matches in Dota, and still caught a large number of players abusing the system. As far as I know, CS:GO's anti-cheat does involve an ML component.
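The autoencoder approach in miniature: learn a compressed representation of normal match statistics, then flag matches that reconstruct poorly. This sketch substitutes a linear "autoencoder" (PCA via SVD) for a neural one, and the match features are synthetic random data, not anything from an actual game:

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 8))   # synthetic "normal" matches
anomaly = np.full((1, 8), 6.0)                 # one wildly abnormal match
data = np.vstack([normal, anomaly])

# Fit a 3-component linear bottleneck on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:3]

# Encode, decode, and measure per-match reconstruction error.
encoded = (data - mean) @ components.T
decoded = encoded @ components + mean
errors = np.square(data - decoded).sum(axis=1)

# Flag anything reconstructing worse than the 99th percentile of normal play.
threshold = np.percentile(errors[:-1], 99)
flagged = np.where(errors > threshold)[0]
print(flagged)  # the anomalous match (index 500) should be among these
```

A real system would use richer features (accuracy, reaction times, movement patterns) and a proper neural autoencoder, but the flag-on-reconstruction-error principle is the same.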
My point is that a non-intrusive anti cheat, advanced analytics, and tracking of user feedback goes a long way.
Ofc, none of this matters. If the playerbase actually cared, they'd boycott or stay away. And I cannot remember the last time gamers ran a successful boycott campaign.
edit: Also read that uninstalling the game will not always uninstall the ring 0 anti cheat. I can't verify since I would never install this on my system, but for what it is worth: That is terrible IF true.
Serious players pay extra to queue up in dedicated services for high-tickrate servers and anti-cheats, which I believe are rootkits as well... not sure about any of this, though.
CS:GO has a lot more hackers than games with more intrusive anticheats like Overwatch in my experience. Only solution is an invasive anticheat, machine learning, and trust factor systems.
Since they changed their launcher system a few months ago, it's been unusual to have to wait more than ~2 minutes for a new patch.
If you're like me and only played occasionally, the updates would build up and take a very long time.
Most anti-cheats also scan all processes' memory and even files to detect known cheat signatures. They tend to run with high privileges, and some take in-game screenshots for analysis. Basically, they have permission to do anything, and they receive silent updates.
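Signature scanning itself is conceptually simple: search memory or files for known byte patterns. A toy sketch of the idea, with the signature names and bytes entirely invented:

```python
# Placeholder signature database; real anti-cheats ship thousands of
# patterns (and fuzzier heuristics) via those silent updates.
KNOWN_SIGNATURES = {
    "example_wallhack": b"\xde\xad\xbe\xef",
    "example_aimbot":   b"\xca\xfe\xba\xbe",
}

def scan(memory: bytes):
    """Return the names of any known signatures found in the buffer."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in memory]

# Simulated memory dump containing one planted signature.
dump = b"\x00" * 64 + b"\xde\xad\xbe\xef" + b"\x00" * 64
print(scan(dump))  # -> ['example_wallhack']
```

The hard part is not the search but the access: reading other processes' memory is exactly what requires the elevated privileges being debated here.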
I wonder if statistical methods to detect cheaters result in too many false positives.
I was surprised hearing this. It seems like what they actually did was if VAC already found something, it checked the hashes of the contents of the DNS cache against a list as a second check. That's quite a bit different from "intercepting DNS queries".
Overall, VAC always made a reasonable impression on me as far as privacy and security are concerned (no SYSTEM services, no kernel driver, no screenshots, no scanning and uploading of random files, etc.), although this non-intrusive approach naturally limits the kinds of cheats it is able to discover. I feel like the approach taken by Valve is, on the whole, well balanced.
Source: https://www.pcgameshardware.de/Steam-Software-69900/Specials...
In this case, make the cartridge a bootable SSD which entirely avoids touching any other disk in the system (perhaps with the exception of an SD card or USB storage stick for saves.)
The downsides include:
- the game company now has to ship a complete OS and do hardware support. They nearly have to do that anyway, so whatever.
- you'll need to reboot your computer for each game.
The upsides, I think, are obvious.
There are outstanding issues to resolve there, like input lag and visual fidelity, but it certainly removes the ability to cheat at the system level by hooking into game processes and memory.
Aimbots would still be theoretically possible through MITM video-feed analysis (as has been speculated), but that would also work in your cartridge scenario.
This anti-cheat software is for their new game Valorant, which is a Counter-Strike-like shooter.
Reading through this, it seems the game development world is doing the exact opposite and pushing all the "security" measures to the client. Is that incorrect? If it's correct, does anybody have any idea why?
I'm not saying what Valorant has done here is right; there are other things you can do. But you're oversimplifying the problem.
Plus, there seems to be a lot of focus on client-side anti-cheat when a lot of it could be addressed server-side:
> For example an aimbot that steadies your cursor on someone’s head or dodges automatically when a projectile is inbound.
This sounds like a similar problem to "like" fraud and things like that. Couldn't it be addressed by measuring the number of incidents? If someone is able to headshot or dodge at an abnormal/superhuman level, that can be detected server-side and the user banned (or flagged for human review).
> Maybe the client hijacks the UI to hide terrain and walls.
Someone mentioned a solution for this elsewhere in the thread: don't send positions of important resources to the client if it doesn't need them. Keep the client about as blind as the player.
And again, you should be able to detect this server-side. If somebody has an abnormally high kill-rate for enemies coming around a corner, flag them for review.
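That kind of server-side statistical flagging can be as simple as a z-score check against the player population, with outliers routed to human review rather than auto-banned (which keeps false positives from turning into wrongful bans). The stat being measured, the rates, and the threshold here are all invented for illustration:

```python
from statistics import mean, stdev

def flag_outliers(rates, z_threshold=4.0):
    """Return indices of players whose rate sits far above the population mean."""
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(rates) if (r - mu) / sigma > z_threshold]

# 99 ordinary players plus one superhuman outlier (e.g. a hypothetical
# "kills on enemies rounding a corner" rate).
rates = [0.10 + 0.01 * (i % 5) for i in range(99)] + [0.95]
print(flag_outliers(rates))  # -> [99]
```

The threshold tunes the precision/recall trade-off: set it high and you only flag the blatant cases, which is exactly where human review fits in.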
At that point, just make your own game, or easier yet, play another one.