While you're at it, if there is a way you can get the Google Identity Platform PM to let us know when some form of multi-factor auth besides SMS will be supported, that would be great!
If they respond here, it would be as you suggested: they are monitoring media for mentions.
If they don't, people probably can't get their issues resolved as fast.
But responding here creates the impression that this channel gets higher priority, which is sometimes criticized (I don't know a better term for it).
Either way, thanks to the PM for an update about this.
>While you're at it, if there is a way you can get the Google Identity Platform PM to let us know when some form of multi-factor auth besides SMS will be supported, that would be great!
When they respond, they get inundated with random requests and complaints that they have exactly zero control over.
As per your link:
> If $XDG_CACHE_HOME is either not set or empty, a default equal to $HOME/.cache should be used.
So even if Huggingface is aware of that variable (and it probably is) that won't help at all.
[1] https://huggingface.co/transformers/v4.3.3/installation.html...
How should that be communicated by the user? A prompt when opening every application asking where you'd like the data, config, state, and cache dirs to live? And if so, how would the application figure out where that config is stored?
These env vars (with the exception of XDG_RUNTIME_DIR) exist explicitly and only for when the user cares to override them.
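For what it's worth, the fallback rule from the spec is a one-liner in shell; a minimal sketch (the paths are examples):

```shell
# XDG spec fallback: use $XDG_CACHE_HOME if set and non-empty,
# otherwise default to $HOME/.cache. ${VAR:-default} treats an empty
# value the same as an unset one, which matches the spec's wording.
HOME=/home/alice
unset XDG_CACHE_HOME
echo "${XDG_CACHE_HOME:-$HOME/.cache}"   # /home/alice/.cache

# Only an explicit override changes the result:
XDG_CACHE_HOME=/tmp/my-cache
echo "${XDG_CACHE_HOME:-$HOME/.cache}"   # /tmp/my-cache
```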
~ is a BASH tilde expansion for $HOME. So in this case Cloud Run would have been looking at /home/root/.cache/ which doesn’t exist (/root/.cache/ is what was being built), whereas another user would have /home/username/.cache/ and run as expected.
PS. I was initially going to call ~ an alias, but I checked myself and found it’s actually considered a BASH tilde expansion. While ~ alone operates as an alias, I learned there are all kinds of other uses for variations of ~ which hopefully someone will find useful:
http://www.gnu.org/software/bash/manual/html_node/Tilde-Expa...
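A quick way to see the expansion in action (the path is an example):

```shell
# ~ at the start of an unquoted word expands to the value of $HOME
# before the command runs:
HOME=/home/alice
echo ~/.cache        # /home/alice/.cache

# Inside quotes, tilde expansion does NOT happen:
echo "~/.cache"      # ~/.cache

# ~username expands to that user's homedir from the passwd database,
# independently of $HOME.
```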
Firstly, the article says that Cloud Run is setting $HOME to /home, which means Huggingface would've been looking at /home/.cache, not /home/root/.cache. $HOME is not a base path to which the username is appended to get the homedir; it's the homedir itself.
I also assume this isn't the article author using a shorthand (i.e. writing "/home" when they mean "/home/someuser"), because the SO post they link to also says the same thing. So, as busted as it is, it does sound like Cloud Run is setting HOME to /home.
Secondly, and more importantly, my point is that if Cloud Run is setting HOME=/root at container build time and HOME=/home at container runtime, then any path rooted to $HOME is going to be different at build time vs runtime, regardless of what user the process in the container is running as.
$ docker run -it --rm -e HOME=/foo debian:10-slim sh -c 'echo $HOME/.cache'
/foo/.cache
$ docker run -it --rm -e HOME=/bar --user 1000 debian:10-slim sh -c 'echo $HOME/.cache'
/bar/.cache
So as good as it is to not run containerized processes as root, I don't think it makes any difference to this particular issue.

In short, it's a pain for the average person just trying to get something running.
For example, you want to mount /docker/my-app/mysql-container/var/lib/mysql into /var/lib/mysql of the container. Maybe you're doing this so you can view the files on the host system and not use the Docker Volumes abstraction, maybe you want to do backups on a per directory basis and Docker doesn't let you move separate volumes into separate directories, or any other reason. Some orchestrators like Docker Swarm won't create the local directory for you, so you just end up doing that yourself, with UID:GID of 1004:1004 (or whatever your local server user has).
Now, if you run the container with the default settings, which indeed use the root user, there is a good chance of it working (illustrated here with Docker):
> docker run --rm -e MYSQL_ROOT_PASSWORD="something-hopefully-long" -v "/docker/my-app/mysql-container/var/lib/mysql:/var/lib/mysql" mysql:5.7
... lots of output here
[Note] mysqld: ready for connections.
Because by default, even MySQL uses root inside of the container:
> docker run --rm mysql:5.7 id
uid=0(root) gid=0(root) groups=0(root)
When you change it to another user without knowing which one you need, which is pretty common, it breaks:
> docker run --rm -u 1010:1010 -e MYSQL_ROOT_PASSWORD="something-hopefully-long" -v "/docker/my-app/mysql-container/var/lib/mysql:/var/lib/mysql" mysql:5.7
[ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable
[ERROR] InnoDB: The innodb_system data file 'ibdata1' must be writable
[ERROR] InnoDB: Plugin initialization aborted with error Generic error
[ERROR] Plugin 'InnoDB' init function returned error.
[ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
[ERROR] Failed to initialize builtin plugins.
[ERROR] Aborting
As you can see, the error messages don't come from Docker or any other system that knows what's actually happening, but from the piece of software itself, in this case MySQL. Not only that, but it doesn't give you actionable steps on its own, and "Generic error" will only serve to confuse many newer users, who'll fail to understand what "The innodb_system data file 'ibdata1' must be writable" means.

How many users do you think will understand what the error message means and will know how to solve it? How many container images out there won't give user-friendly error messages and will instead just crash?
How many users do you think will notice the "Running as an arbitrary user" under the instructions for that particular container image [0]? How many container images will even support running as arbitrary users?
How many users do you think will be able to find the documentation for the parameters that they need for either Docker Compose [1] or specifying a user in Docker [2]?
To avoid that problem, you need to:
- know about users/groups and permissions management in GNU/Linux
- know what your user/group is if you're creating anything like bind mounts
- know how to set the user/group that's going to run inside of the container
- know that the external container you use will handle them properly (but also know how it's initialized, e.g. no additional config necessary, since some badly containerized pieces of software will start additional processes with UID/GIDs that have been set in some configuration file somewhere in the container)
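A pre-flight check for the bind-mount part of that list might look something like this (the path stand-in and UID 1010 are examples from above, not anything Docker itself provides):

```shell
# Compare the owner of the host directory against the UID the container
# will run as, before starting the container.
data_dir=$(mktemp -d)   # stand-in for /docker/my-app/mysql-container/var/lib/mysql
container_uid=1010      # the -u value you plan to pass to docker run

owner_uid=$(stat -c %u "$data_dir")
if [ "$owner_uid" != "$container_uid" ]; then
    echo "owner is $owner_uid, but container will run as $container_uid"
    echo "fix with: chown -R $container_uid:$container_uid $data_dir"
fi
rmdir "$data_dir"
```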
And with all of that, how many users do you think will decide that none of the above is worth the hassle and just run their containers as root, accepting potential risks in the future over having problems now? If something is hard to do, it simply won't be done in practice, especially if things work without doing it.

Though I'm not sure what can be done to improve the DX here without making Docker aware of attempts to access files inside of the container. Plus, managing users and groups has far too many approaches to begin with [3].
Links:
[0] https://hub.docker.com/_/mysql
[1] https://docs.docker.com/compose/compose-file/compose-file-v3/
[2] https://docs.docker.com/engine/reference/run/#user
[3] https://blog.giovannidemizio.eu/2021/05/24/how-to-set-user-and-group-in-docker-compose/

My only advice is: when using this or Cloud Functions, always start by creating a function/image that prints all env variables.
For Cloud Functions these change tremendously between Python versions. Logging also changes completely between some Python versions, to the point where upping the runtime causes logs to stop being saved to Cloud Logging.
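As a sketch, such a diagnostic image's entrypoint only needs to do something like this (the image name in the comment is hypothetical):

```shell
# Dump the runtime environment, sorted so that output from different
# runtimes/builds can be diffed directly:
env | sort

# e.g. docker run --rm --entrypoint sh my-env-debug -c 'env | sort'
```

Comparing that output between build and run (or between runtime versions) is what surfaces surprises like the HOME mismatch above.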