AIs solving shallow problems is NOT the most critical issue.

lmsys.org @lmsysorg Introducing VTC: the first fair scheduler for LLM serving.

Are you troubled by OpenAI’s rate limit? Although it’s a necessary mechanism to prevent any single client from monopolizing the request queue, we demonstrate that it doesn’t guarantee either fairness among clients or… https://pic.twitter.com/Bs42RloLHS
Replying to @lmsysorg



I would much rather have a price range. If I ask it to spend an hour on a problem and then post the results to a file or folder, it can schedule the work and get the job done. Right now, "forget everything as you go" is more harmful than any rate limitation. Almost every real problem takes tens of steps, or tens of millions of steps.
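The budgeted, file-backed job described above can be sketched in a few lines. This is a hypothetical interface, not any real OpenAI API: `ask_model` stands in for whatever LLM call a service exposes, and the time budget and output path are assumptions for illustration.

```python
import json
import time
from pathlib import Path

def run_budgeted_job(task, ask_model, budget_seconds=3600, out_path="results.json"):
    """Sketch of a budgeted, resumable job: instead of a per-minute rate
    limit, the caller sets a time budget, and every intermediate result is
    appended to a file so nothing is forgotten between steps.
    `ask_model` is a placeholder for whatever LLM call the service exposes."""
    deadline = time.time() + budget_seconds
    steps = []
    state = task
    while time.time() < deadline:
        answer = ask_model(state)            # one reasoning step
        steps.append({"prompt": state, "answer": answer})
        # Persist after every step: the job survives restarts and the
        # user can inspect or share the partial results at any time.
        Path(out_path).write_text(json.dumps(steps, indent=2))
        if answer.get("done"):               # model signals completion
            break
        state = answer["next_prompt"]        # carry the working state forward
    return steps
```

The point of the design is that the caller pays for a time budget rather than a request count, and the file (not the chat window) is the memory between steps.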

The database and responses are not the problem; the bad algorithms using the data are. Most of the errors come from a lack of checking results before speaking. I have to constantly repeat myself because the interface memory is so tiny. OpenAI policy throws away all permanent memory, so we poor humans have to keep re-instructing an app that remembers nothing. That is a deliberately limited design, not any fundamental limit on using this kind of parameter database. I know that no one will listen to a person who has only spent 58 years working at these kinds of things, but I will write it anyway.

I think OpenAI is a shameful, callous experiment to see how much the world will pay for a deliberately limited toy. They will always do the least to make the most money for a fixed fee. If they set a fee for performance that could be shopped around, and let better "GPT-4 based hosting services" compete, then I could go somewhere the app is not programmed to forget everything I tell it and try to teach it. My instructions on how to approach each problem would apply, not those of some newbie programmer or manager making it up to make the most money.
 
I really hope it is greed and maliciousness. If they really do not understand that “log everything to accessible permanent memory” is the first element of learning, that would just be sad for the world. And “check your work before you speak”. And “put the results in an open format I can save and share”.
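The two habits named above, log everything to accessible permanent memory and check your work before you speak, can be sketched as a thin wrapper. Everything here is hypothetical: `model` stands for any LLM call, `verify` for any domain-specific check, and the JSON Lines log is one choice of the "open format I can save and share".

```python
import json
import time

class PermanentMemory:
    """Minimal sketch, assuming a generic model(prompt) callable: every
    exchange is appended to an open-format log (JSON Lines) the user can
    save and share, and each answer is re-checked before it is returned."""

    def __init__(self, model, verify, log_path="memory.jsonl"):
        self.model = model      # any LLM call; placeholder here
        self.verify = verify    # returns True if the answer checks out
        self.log_path = log_path

    def ask(self, prompt, max_tries=3):
        for attempt in range(max_tries):
            answer = self.model(prompt)
            ok = self.verify(prompt, answer)     # check the work first
            with open(self.log_path, "a") as f:  # log everything, permanently
                f.write(json.dumps({"time": time.time(), "prompt": prompt,
                                    "answer": answer, "verified": ok}) + "\n")
            if ok:
                return answer
        return None  # refuse to "speak" an unverified result
```

Note that failed attempts are logged too: a permanent record of what was tried and rejected is part of learning, not just the final answer.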
 
I can use GPT for shallow problems, but for absolutely nothing at scale. Tricking it into saying something useful requires that I already know which answers are valid, and that I maintain the framework a mature and responsible AI would maintain for itself. That framework is "the right thing for all humans", not some accidental "get around the OpenAI managers" paradigm of what is best for them.
 
Richard Collins, The Internet Foundation

About: Richard K Collins

Director, The Internet Foundation. Studying the formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards and best practices.

