> gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768-token context (about 50 pages of text) version, gpt-4-32k, which will also be updated automatically over time (current version gpt-4-32k-0314, also supported until June 14). Pricing is $0.06 per 1K prompt tokens and $0.12 per 1K completion tokens.
The context length should be a huge help for many uses.
Oh snap. I didn't even think about that!
That gives me a fun idea!
I've got a repo where I set up CI/CD and Renovate to automatically upgrade dependencies and merge them when all the tests pass, but of course sometimes there are breaking changes. I don't actively work on the project, so failed upgrades just sit there as open issues. It's the perfect testing ground to see if I can leverage it to submit PRs with the fixes required for the upgrade to succeed! That'll be hectic if it works.
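The loop described above could start with something as simple as packaging the failed upgrade into a prompt. A minimal sketch, assuming the CI pipeline hands us the Renovate diff and the failing test log (the `build_fix_prompt` helper and the example diff are made up for illustration; the actual API call and PR creation are left out):

```python
# Sketch: turn a failed dependency-upgrade build into a fix request
# that could be sent to a model. The helper and inputs are illustrative;
# collecting the real diff and test log is the CI pipeline's job.

def build_fix_prompt(dep_diff: str, test_output: str) -> str:
    """Combine the upgrade diff and failing test log into one prompt."""
    return (
        "A dependency upgrade broke this project's test suite.\n\n"
        f"Upgrade diff:\n{dep_diff}\n\n"
        f"Failing test output:\n{test_output}\n\n"
        "Suggest a patch (unified diff) that makes the tests pass."
    )

if __name__ == "__main__":
    prompt = build_fix_prompt(
        "- lodash 3.10.1\n+ lodash 4.17.21",
        "TypeError: _.pluck is not a function",
    )
    print(prompt)
```

The response would then be applied as a patch and pushed as a PR, with the existing test suite acting as the safety net before merge.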
UI stuff just has an input problem. It's not hard to imagine ChatGPT placing widgets once it can consume images and has a way to move a mouse.
I see some FOSS-boosting silver linings in all of this.
Or another option is having one instance or chat per code page, and one that basically just holds an API index and knows which chat has the related things.
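That two-tier idea could be as simple as a mapping from code pages to chat sessions, with one index deciding where a question gets routed. A sketch under that assumption (all names here are made up for illustration, not any real API):

```python
# Sketch of the two-tier idea: one chat per code page, plus an index
# that knows which chat holds which part of the codebase.
from typing import Optional


class ChatIndex:
    def __init__(self) -> None:
        self._pages: dict[str, str] = {}  # page name -> chat/session id

    def register(self, page: str, chat_id: str) -> None:
        """Record that this code page lives in this chat."""
        self._pages[page] = chat_id

    def route(self, symbol: str) -> Optional[str]:
        """Return the chat whose page name mentions the symbol, if any."""
        for page, chat_id in self._pages.items():
            if symbol in page:
                return chat_id
        return None


# Usage: questions about login code go to the chat holding that page.
index = ChatIndex()
index.register("auth/login.py", "chat-1")
index.register("billing/invoice.py", "chat-2")
print(index.route("login"))  # chat-1
```

A real version would need smarter lookup than substring matching, but the point is that the index chat only has to hold the map, not the code itself.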
In contrast, GPT-3.5's text-davinci-003 was $0.02 per 1K tokens, and let's not even get into the ChatGPT API.
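For a rough sense of scale, the quoted rates work out like this. A back-of-envelope sketch using the per-1K prices above (note davinci's real context window was far smaller than 32K, so this is a per-token comparison only, and the 30K/2K split is an arbitrary example):

```python
# Cost of one maxed-out gpt-4-32k call vs. text-davinci-003 pricing,
# using the per-1K-token rates quoted in this thread.

PROMPT_RATE_32K = 0.06 / 1000      # $ per prompt token
COMPLETION_RATE_32K = 0.12 / 1000  # $ per completion token
DAVINCI_RATE = 0.02 / 1000         # $ per token (prompt or completion)

def gpt4_32k_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return (prompt_tokens * PROMPT_RATE_32K
            + completion_tokens * COMPLETION_RATE_32K)

# A fully packed context split 30K prompt / 2K completion:
cost = gpt4_32k_cost(30_000, 2_000)
print(f"${cost:.2f}")                        # $2.04 per call
print(f"${32_000 * DAVINCI_RATE:.2f}")       # $0.64 at davinci's rate
```

So a single full-context call costs a couple of dollars, roughly 3x what the same token volume would cost at davinci's rate.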
This is a lot. I bet there's quite a bit of profit in there.
Is this profit-seeking pricing, or pricing meant to induce folks to self-select out?
Genuine question — I don’t know enough about this area of pricing to have any idea.
Depends on what is up with the images and how they translate into tokens. I really have no idea, but it could be that 32K tokens (lots of text) translates to only a few images for few-shot prompting.
The paper doesn't seem to mention image tokenization, but it should be possible to infer something about the token rate by actually using the API and looking at how you're charged.
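The inference suggested above is just a difference of billed usage: send the same prompt with and without an image and subtract the reported prompt-token counts. A sketch of that arithmetic (the example numbers are invented, and getting the real counts from the API's usage report is assumed, not shown):

```python
# Estimate the token cost of one image by differencing billed usage:
# tokens(text + image) - tokens(text alone) ~= tokens charged for the image.

def image_tokens(billed_with_image: int, billed_text_only: int) -> int:
    """Prompt-token difference attributable to the attached image."""
    return billed_with_image - billed_text_only

# Invented example: text-only prompt bills 120 tokens, same prompt
# plus one image bills 885, so the image cost about 765 tokens.
print(image_tokens(885, 120))  # 765
```

Repeating this for different image sizes would show whether the charge is flat per image or scales with resolution.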
I'm not super versed in LangChain, but that might be kinda what it solves...
>Image inputs are still a research preview and not publicly available.
Will input images also be tokenized? Multi-modal input is an area of research, but perhaps an image could be converted into a text description before being inserted into the input stream.
https://help.openai.com/en/articles/4936856-what-are-tokens-...
Now that you have read my answer, you owe me $0.01 because your brain might use this information in the future.
In the first case, you found or bought a book and read it. No one can or should make you pay for that, unless you stole the book.
In the second case, you found or bought a book, then reprinted it infinitely and sold it for profit; ethically you should pay the author, and legally you would be in violation of the law.
Even if you made a machine that ingests and recombines books automatically, and you keep that machine locked up and charge people for its use, it is the same scenario: the machine would be absolutely useless without the original books, and those books cost people effort and money to produce, yet you pay those people nothing while the machine is basically an infinite money maker for you.
I hope the analogy makes sense.
When it comes to spam culture, sure. But will we ever be there? "AI art" isn't impressive and never will be, except in the academic sense. Nothing more.