This is a bilingual snapshot of https://www.reddit.com/r/cursor/comments/1kqj7n3/cursor_intentionally_slowing_nonfast_requests/ saved by the user on 2025-06-07 at 08:12; bilingual support was provided by Immersive Translate.
Cursor intentionally slowing non-fast requests (Proof) and more.

Cursor team, I didn't want to do this, but many of us have noticed recently that the slow queue is suddenly, significantly slower, even on models that are typically fast for the slow queue (like Gemini 2.5 Pro), and it is unacceptable how you are treating us. I noticed it and decided to see if I could uncover anything about what was happening. As my username suggests, I know a thing or two about hacking, and while I was very careful about what I was doing so as not to break Cursor's TOS, I decided to reverse engineer the protocols being sent and received on my computer.

I set up Charles Proxy and Proxifier to force-capture and view requests. Pretty basic. Lo and behold, I found a treasure trove of things Cursor is lying to us about: everything from how large the auto context handling is on models (both Max mode and non-Max mode), to how they pad the numbers on the user-viewable token count, to how they now automatically place slow requests into a default "place" in the queue that counts down from 120. EVERY TIME. WITHOUT FAIL. I plan on releasing a full report, but for now it is enough to say that Cursor is COMPLETELY lying to our faces.

I didn't want to come out like this, but come on, guys (Cursor team)! I kept this all private because I hoped you could get through the rough patch and get better, but instead you are getting worse. Here are the results of my reverse engineering efforts. Let's keep Cursor accountable, guys! If we work together we can keep this a good product! Accountability is the first step! Attached is a link to my code: https://github.com/Jordan-Jarvis/cursor-grpc With this, ANYONE who wants to view the traffic going between Cursor's systems and your system can. Just use Charles Proxy or similar; I had to use Proxifier as well to force some of the plugins to respect it. You can replicate the screenshots I provided YOURSELF.
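For anyone replicating the capture: the endpoints discussed in this thread are gRPC-style, and standard gRPC bodies are length-prefixed protobuf frames. As a hedged illustration of that framing only (it is an assumption that Cursor uses stock gRPC framing; verify against a real Charles capture), a minimal frame splitter might look like:

```python
import struct

def split_grpc_frames(body: bytes):
    """Split a gRPC message body into (compressed_flag, payload) pairs.

    Standard gRPC wire framing: a 1-byte compressed flag plus a 4-byte
    big-endian length, followed by the payload. Whether Cursor's traffic
    uses exactly this framing is an assumption to check against a capture.
    """
    frames = []
    offset = 0
    while offset + 5 <= len(body):
        compressed, length = struct.unpack_from(">BI", body, offset)
        offset += 5
        frames.append((bool(compressed), body[offset:offset + length]))
        offset += length
    return frames

# Example: one uncompressed 3-byte frame followed by an empty frame.
sample = b"\x00\x00\x00\x00\x03abc" + b"\x00\x00\x00\x00\x00"
print(split_grpc_frames(sample))
```

Each extracted payload can then be fed to a protobuf decoder (e.g. `protoc --decode_raw`) to get dumps like the ones shown later in this thread.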

Results: You will see context windows which are significantly smaller than advertised, limits on rule size, pathetic chat summaries which are two paragraphs long before chopping off 95% of the context (explaining why it forgets so much, seemingly at random), the actual content being sent back and forth (BidiAppend), the queue position which counts down one position every 2 seconds... on the dot... and starts at 119... every time... and so much more. Please join me and help make Cursor better by keeping them accountable! If it keeps going this way I am confident the company WILL FAIL. People are not stupid. The competition is significantly more transparent, even if they have their flaws.
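Taking the numbers reported in this thread at face value (a position that starts at 119, ticks down once every 2 seconds, and, per a later comment, only dispatches once the counter passes -1), the "queue" behaves like a fixed timer. A quick back-of-the-envelope sketch, using only the thread's reported values:

```python
# Observed values reported in this thread (assumptions, not Cursor internals).
START_POSITION = 119   # counter reportedly starts here every time
TICK_SECONDS = 2       # reportedly decrements exactly once every 2 seconds
FINAL_VALUE = -1       # request reportedly dispatches only after reaching -1

# Ticks needed to go from 119 down through -1, inclusive.
ticks = START_POSITION - FINAL_VALUE + 1
wait_seconds = ticks * TICK_SECONDS
print(f"fixed wait: {wait_seconds} s (~{wait_seconds / 60:.1f} min)")
```

A flat floor of roughly four minutes would be consistent with the "over 4 minutes" and "5 mins" waits reported further down the thread.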

There is a good chance this post will get me banned, so please spread the word. We need Cursor to KNOW that WE KNOW THEIR LIES!

Mods, I have read the rules: I am being civil, providing REAL, VERIFIABLE information (so not misinformation), providing context, am NOT paid, etc. If I am banned, or if this is taken down, it will purely be due to Cursor attempting to cover their behinds. BTW, if it is taken down, I will make sure it shows up in other places. This is something people need to know. Morally, what you are doing is wrong, and people need to know.

I WILL edit or take this down if someone from the Cursor team can clarify what is really going on. I fully admit I do not understand every complexity of these systems, but it seems pretty clear some shady things are afoot.

Dev

Hey! Just want to clarify a few things.

The main issue seems to be around how slow requests work. What you're seeing (a countdown from 120 that ticks down every 2 seconds) is actually a leftover protobuf artifact. It's not connected to any UI; it exists just for backwards compatibility with very old clients.

Now, wait times for slow requests are based entirely on your usage. If you’ve used a lot of slow requests in a given month, your wait times may be longer. There’s no global queue or fixed position anymore. This is covered in the docs here:

https://docs.cursor.com/account/plans-and-usage#how-do-slow-requests-work

In general, there are a lot of old and unused protobuf params still there due to backwards compatibility. This is probably what you're seeing with summaries as well. A lot of the parameters you’re likely seeing (like cachedSummary) are old or unused artifacts. They don’t reflect what’s actually being sent to the model during a request.

On context window size, the actual limits are determined by the model you’re using. You can find the specific context sizes and model details here:

https://docs.cursor.com/models#models

Appreciate you raising this. Some of what you’re seeing was real in older versions, but it no longer reflects how the system works. We’ll keep working to make the behavior clearer and more transparent going forward.

Happy to follow up if you have more questions.

Nonsense, I have to wait 5 minutes for slow requests. You expect us to buy another sub?

"Now, wait times for slow requests are based entirely on your usage." Bro, i literally was fine yesterday and day before, today everything is 5000% longer, or did you changed it this weekend?
"现在,慢请求的等待时间完全取决于你的使用情况。" 兄弟,我昨天和前天都好好的,今天所有事情都慢了 5000%,还是你们这个周末改了?

}
Profile Badge for the Achievement Top 1% Commenter Top 1% Commenter
Profile Badge for the Achievement Top 1% Commenter 1% 评论者

Did you, by any chance, use a lot of requests between "yesterday and the day before"?

Not more than in the days before; actually, I would say I made fewer calls this weekend than normal.
[deleted]

Comment removed by moderator


So we're talking about a different problem. You're pointing to something that has been here with us for many months, but everyone who opened Cursor today and is not using premium calls or their own API had the same problem... Do you know about it?

edit: it was a comment written by ecz-


I am sorry, but I call bull on most of this. I admit some of it may be artifacts that are no longer used, but I have done a good amount of reverse engineering, and I have been observing this for several months now, hoping things would change. The countdown in the past was usually at most 15, and it went down very quickly, sometimes skipping several positions, like a real queue would function. The change in the last 48 hours or so has been drastic, and the behavior changed: like a timer, the request will not start processing until the value is -1, and that is still the truth. So I KNOW that one is a lie. If anyone here wants to verify this themselves, they can: Charles Proxy is free for 30 days, and the proto is available. I understand there is a lot of file syncing, BidiAppends, diff checking, etc. I have seen those requests too. I see what my computer is sending and getting back, including the newly implemented dry-run system for token counts. Respectfully, I understand you need to provide a professional response, but I have evidence otherwise, and now anyone who wants to take a few minutes can figure it out as well. I know the hex-encoded prompt system you guys use. I understand a lot more than I am letting on. In fact, I even have a few vulnerabilities I have neglected to report. I can and will use proper channels for those, though. I don't want to cause any more grief than I already have to you and your team.

+1 to this. I guarantee there has been a change here whether intended or not within the past 48 hours and I hope the team will revert it.

u/Da_ha3ker knows more than he's letting on.. cursor better be scared!

this subreddit is unbelievable

They are vibe coding Cursor... that's the truth.

Not only requests, but the application itself. System prompts, binaries, etc...

Hi, if your post gets deleted, I would like to raise it on forum.cursor.com.

Is there any DM or something on Reddit?
[deleted]

Comment removed by moderator


How did we go from "Happy to follow up if you have more questions" to your response in one message? Not a good look.


Lying to his face and asking him to do you guys a favor? He isn’t the first to reverse engineer and expose stuff like this. There was a similar post about models and context a few weeks ago that was just mass deleted.

OP provided concrete examples and has obviously put in a ton of time into this. Your response reads like a vague, standard PR response.


Will do, thanks


Honestly not sure if you think someone is actually buying that, as if the changes to context handling and queues aren't noticeable... you guys didn't go from 50ms to 60ms... you should be aware that when you artificially make the service worse to force people to pay for MAX... well... the service is going to be worse...

And the fact that anyone half active on Reddit has actually seen you lying and banning posts like OP described doesn't help your case...

My responses from Gemini pro early yesterday were within 10 seconds. Today they're taking over 4 minutes every time. This happened on Monday, the 5th of May, as well, and my slow requests don't regenerate for another 9 days. Your hypothesis about usage in a given month isn't correct in this case.

Is there any chance it's on Google's end rather than Cursor's? Surely you guys must be monitoring average response times across the models.

It's just strange, many people seem to notice these massive slowdowns at the same time, and there's no communication from Cursor about what causes them.

Similar story here. Overnight change from modest wait times for 2.5 Pro to intolerable ones without any unusual burst in usage.

I enabled usage-based billing for Premium and bam, instantly back to snappy. So clearly it's not some general problem with Google or with Cursor's interface to it.

I actually think paying for the additional Premium calls is perfectly fair and have no problem with this in itself. Ultimately withdrawing unlimited use is something they have to do to be commercially viable.

But Cursor has previously made a huge deal about unlimited and this kind of user hostile gaslighting is a bad look. Cursor does way too much of that.

They should be honest and upfront: just sunset unlimited use / cripple the slow pool / whatever it is they want to do, but with clear notice, ideally 12 months, so people who signed up for a year for that feature aren't screwed over.

After these changes, would it be possible to request a refund?

Think about those, like myself, who subscribed for a full year, and now find the behavior of slow requests has been altered.

Frankly, I don’t think it’s fair.

Something has changed…

I spend 10+ hours on cursor every day. Working both on my regular job and then side project(s).

I run out of my 500 requests really quickly. Until today, the regular "slow requests" were still workable for me. But I could clearly tell the difference today. It has been painfully slow up to the point that it's not usable anymore.

Probably going to try windsurf tonight to see if there's a viable alternative.

Edit: Decided to test GitHub Copilot after catching Microsoft's keynote yesterday. What a game-changer! For half the price, you get unlimited access to GPT-4.1 (at least for now). It's night and day compared to Cursor's slog. Unsubscribed from Cursor faster than anything I've ever unsubscribed from.

What a massive fumble by them!

Yes, if this is how Cursor is now permanently then I’ll also be migrating to Windsurf today too…

Omg this just happened 🤣 perfect timing.

I'm more interested in the purported context size, rule and chat limits, etc. To me, slow requests are just price discrimination, and I expect them to be made more painful than fast requests to encourage paying beyond the flat rate once you're over 500 fast requests.

The other stuff is more concerning if it affects core fast request functionality.

Yup, they have a "dryRun" call they run to simulate how large the context window is so it seems larger for people looking at it in the UI

lol I'm done with cursor

Took one for the team. Appreciate you running Charles for us.

I'm waiting for 2 things to happen:

  • OG Cursor Devs leaving to make a better company

  • this post gets deleted

HORRIBLE experience with Cursor in the last 24 hours!

100 percent they are; I noticed it in the past week or so. This is the fastest way to push people away.

wtf cursor


I think an open-source Cursor can easily be done; it's all in the prompts/vectorizing/chunking etc., agentic flows and chains of workflows.

I'm sure that as more pricing $$ and dark patterns emerge, people will build stuff.

Man, this is an amazing post. A countdown? Wow.

So this is the reason! The last few days I've been feeling like I will switch to something else, because the waiting time is ridiculous, and before, it was fine even with slow mode.

Can't wait to see what happens, but if nothing does, I will leave next week.

Will post on X when I have some time.

Cursor is horrible with context. You should consider preparing the context yourself, working with external tools (like Gemini Pro in AI Studio), and feeding back the results, using Cursor as a simple agent to do light work and merge diffs. It will be much more consistent.

They 100% have been and will continue to "soft play" with these dark patterns. Innocuous at first... innocent, almost like minor oversights or "simple mistakes".

But I've already seen multiple things over the last ~3 months that lead me to believe they have a bunch more of these tricks up their sleeves.

I get it, they have to make money, so it makes sense, but just own it.

Fastest growth, but also fastest enshittification. Investors must be proud.


This is 100% gonna get taken down by mods so back it up.


Already did. I have a more comprehensive write-up coming.


Oh, but I was told we, the people complaining, were just idiots not knowing how to use it properly?


I stopped my Cursor sub after a year.

The way they limit context and don't let me pass full files to the model is very clear. I'm using Zed AI, and VS Code with a Copilot sub, where I use Cline and Roo Code via the VS Code-exposed LM API. I couldn't be happier.

Cursor also has a tonne of subtle bugs compared to VS Code. I don’t have to put up with that anymore.


Just use Roo Code. Bring your own API key, for free.


It is more about the company lying and trying to get away with it. I use various tools, including Roo. Cursor is just one of them.


I mean, it's really bad that they are hiding it and falsely advertising it. But it's rather simple: people want a $50-$100 product for $20.

You can get no queues and large context if you download Cline. But it quickly gets way too expensive, so we don't.


It's OK, VS Code is coming for them like Slenderman. There will be justice.


Welp, I can't post anymore, but I have some updates.

Cursor and team, I still don't want to do this. I am not paid, I do not have ulterior motives, and I want the best for the Cursor product. I extracted this protobuf response from the api2.cursor.sh/aiserver.v1.AiService/AvailableModels endpoint, so I thought I'd post what I found. Notice the windows are well short of their reported 120k. Even if you add the 65k max output tokens, it still doesn't add up. (Mod bot, this is NOT self-promotion; stop blocking this. I have looked through the rules and I am not violating them.)

Going to state this again: this is not misinformation. It was pulled directly from the network traffic between my system and your systems using a simple proxy, and I've included the URL so other people can monitor and verify it themselves. This is genuine curiosity, and it can be verified by anyone. I'd love an actual explanation of how this works, rather than a PR response. Are these numbers representing the context window minus the system prompt or something? What are we missing here? These windows are significantly smaller than stated on your site, and the api2.cursor.sh/aiserver.v1.ChatService/GetPromptDryRun endpoint (introduced in 0.50.x) counts tokens completely separately from the actual chat. Providing a token count to end users in the UI that isn't actually representative of the real chat context window?

Genuine questions here.

Here is part of the intercepted response for the AvailableModels endpoint.

models {
  name: "claude-4-sonnet"
  supports_agent: true
  degradation_status: DEGRADATION_STATUS_UNSPECIFIED
  tooltip_data {
    5: "0.5x Request"
    7: "**Claude 4 Sonnet**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *120k tokens*\n<span style=\"color:var(--vscode-editorWarning-foreground);\">Cost: 0.5x Requests</span>"
  }
  supports_thinking: false
  supports_images: true
  supports_auto_context: true
  auto_context_max_tokens: 40000
  auto_context_extended_max_tokens: 98000
  supports_max_mode: true
  client_display_name: "claude-4-sonnet"
  server_model_name: "claude-4-sonnet"
  supports_non_max_mode: true
  tooltip_data_for_max_mode {
    7: "**Claude 4 Sonnet**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *200k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  21: 0
}
models {
  name: "claude-4-sonnet-thinking"
  default_on: true
  supports_agent: true
  degradation_status: DEGRADATION_STATUS_UNSPECIFIED
  tooltip_data {
    5: "0.75x Request"
    7: "**Claude 4 Sonnet (thinking)**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *120k tokens*\n<span style=\"color:var(--vscode-editorWarning-foreground);\">Cost: 0.75x Requests</span>"
  }
  supports_thinking: true
  supports_images: true
  supports_auto_context: true
  auto_context_max_tokens: 40000
  auto_context_extended_max_tokens: 98000
  supports_max_mode: true
  client_display_name: "claude-4-sonnet-thinking"
  server_model_name: "claude-4-sonnet-thinking"
  supports_non_max_mode: true
  tooltip_data_for_max_mode {
    7: "**Claude 4 Sonnet (thinking)**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *200k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  21: 0
}
models {
  name: "claude-4-opus"
  supports_agent: true
  degradation_status: DEGRADATION_STATUS_UNSPECIFIED
  tooltip_data {
    7: "**Claude 4 Opus**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *120k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  supports_thinking: false
  supports_images: true
  supports_auto_context: true
  auto_context_max_tokens: 40000
  auto_context_extended_max_tokens: 98000
  supports_max_mode: true
  client_display_name: "claude-4-opus"
  server_model_name: "claude-4-opus"
  supports_non_max_mode: false
  tooltip_data_for_max_mode {
    7: "**Claude 4 Opus**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *200k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  21: 0
}
models {
  name: "claude-4-opus-thinking"
  default_on: true
  supports_agent: true
  degradation_status: DEGRADATION_STATUS_UNSPECIFIED
  tooltip_data {
    7: "**Claude 4 Opus (thinking)**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *120k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  supports_thinking: true
  supports_images: true
  supports_auto_context: true
  auto_context_max_tokens: 40000
  auto_context_extended_max_tokens: 98000
  supports_max_mode: true
  client_display_name: "claude-4-opus-thinking"
  server_model_name: "claude-4-opus-thinking"
  supports_non_max_mode: false
  tooltip_data_for_max_mode {
    7: "**Claude 4 Opus (thinking)**\n\nAnthropic\'s latest model, temporarily offered at a discount.\n\nContext window: *200k tokens*\nCost: [*billed per token*$(arrow-up-right)](https://docs.cursor.com/settings/models)"
  }
  21: 0
}
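The mismatch the dump shows (tooltips advertising a 120k-token context window next to `auto_context_max_tokens: 40000`) can be tabulated straight from the text capture. A throwaway sketch for doing that, using a shortened stand-in `DUMP` (paste a full `AvailableModels` capture in its place; the field names are taken from the dump above):

```python
import re

# Abbreviated stand-in for a full AvailableModels text capture.
DUMP = '''
models {
  name: "claude-4-sonnet"
  auto_context_max_tokens: 40000
  auto_context_extended_max_tokens: 98000
  tooltip_data {
    7: "Context window: *120k tokens*"
  }
}
'''

rows = []
# Each top-level block ends with a closing brace at column 0.
for block in re.findall(r"models \{.*?\n\}", DUMP, re.S):
    name = re.search(r'name: "([^"]+)"', block).group(1)
    auto = int(re.search(r"auto_context_max_tokens: (\d+)", block).group(1))
    adv_match = re.search(r"\*(\d+)k tokens\*", block)
    advertised = int(adv_match.group(1)) * 1000 if adv_match else None
    rows.append((name, auto, advertised))

for name, auto, advertised in rows:
    print(f"{name}: auto context {auto}, advertised {advertised}")
```

Whether the 40k auto-context cap or the 120k tooltip figure reflects what is actually sent to the model is exactly the open question in this thread; the script only surfaces the two numbers side by side.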