The Pros and Cons of ChatGPT Plus
Is it worth upgrading to ChatGPT Plus?
ChatGPT has enjoyed a lot of success ever since its public debut. Its free version exceeds everyday users' expectations in terms of quality, ease of use, and versatility. But OpenAI, the company behind ChatGPT, was quick to roll out a paid version to monetize the service; running it costs money, and almost everyone is using it.
The premium version was surrounded by a lot of hype at launch because it gives subscribers priority access to the newest of OpenAI's language models, GPT-4. Other ChatGPT Plus features include faster response times, better availability during peak times, and plugin support.
So, is the ChatGPT Plus subscription worth it?
The Pros of Upgrading to ChatGPT Plus
ChatGPT Plus costs $20/month. However, apart from the obvious upgrades, such as access to the latest version of OpenAI's GPT language model and new features, the real benefits are more subtle. For example, GPT-4 is much better at understanding context and processing language.
1. Higher Quality Responses With GPT-4
GPT-4 allegedly has more than one trillion parameters, which means it is better than GPT-3.5 at recognizing complex patterns in data and producing better responses. It also has stronger language processing, a better understanding of context, and sharper complex problem-solving skills.
To put all of this to the test, we gave the same prompt to GPT-3.5 and GPT-4. We asked both versions how to explain quantum mechanics to someone unfamiliar with the topic. GPT-3.5 provided a brief introduction to summarize the main points of discussion.
GPT-4 was noticeably better. It got straight to the point and presented a numbered list illustrating key talking points. It immediately explained that using simple analogies is the best way to explain the topic to a layperson, and that's exactly what it did for each talking point.
So, for most prompts, it does a better job of understanding the problem and the context compared to GPT-3.5. Where you'd have to try several different prompts to get your desired output from GPT-3.5 (the free ChatGPT version), GPT-4 produces the same output in significantly fewer prompts.
But these aren't all the GPT-4 and GPT-3.5 differences: GPT-4 can take visual inputs, it's more creative, and its responses are more factually sound, too.
2. Priority Access to New Features
OpenAI is constantly working on new features to improve its language models further. If you subscribe to ChatGPT Plus, you'll get priority access to its latest releases, with an opportunity to test them out before free users.
So, for example, when OpenAI announces a plugin that lets you give audio inputs to ChatGPT, you'd be able to access it right away—if, of course, you've signed up for the ChatGPT plugins through the subscription. Plugins are third-party software components that integrate with the platform. A good example is the browser plugin, which lets ChatGPT access the web and pull sources of information in real time.
3. Faster Response Times
Slow response times and limited availability are two of the most frustrating things about the standard version of ChatGPT. If you upgrade to the Plus version, neither issue is a big deal. While using the Plus version, you can choose the default model (GPT-3.5), GPT-4, or even the legacy version that was available at launch (though this is being phased out).
While the default model is more or less the same as the legacy version in terms of responses, it is much faster. Another interesting insight is that when we asked both GPT-3.5 and GPT-4 to explain quantum mechanics (discussed above), the former was noticeably faster.
GPT-4 is slower than GPT-3.5 because it is a much larger model. With more parameters to run through, it takes longer to generate a response.
4. Better Availability
While using the standard version of ChatGPT, you've likely come across an error message that says, "ChatGPT is at capacity right now." Due to the unprecedented popularity of OpenAI's chatbot, millions of people often try to use the service at any moment. Unfortunately, OpenAI's servers can only handle so much, which is why this error message is all too common.
This lack of availability is less of a problem with the Plus version, as your access to the chatbot is prioritized during peak hours. While free users might be unable to access the service at the time, you'll never even know it's unavailable. Stable access to ChatGPT is a blessing when your workflow heavily relies on it.
The Cons of Upgrading to ChatGPT Plus
While upgrading to ChatGPT Plus surely sounds like a no-brainer, you should be aware of some downsides to the service.
1. Limited Number of Prompts
GPT-4's ability to produce over 25,000 words in one go is underrated, as GPT-3.5 was limited to around 3,000 words per response. But although longer, higher-quality responses are a huge upgrade, there is an almost equally large restriction holding GPT-4 users back: you can only send 25 messages every three hours.
This hard cap is extremely limiting, as you'll quickly find that 25 messages aren't a lot. Originally, the cap was around 100 messages, but OpenAI reduced it to manage demand while it scales GPT-4. So even if you purchase the subscription, your access to the latest GPT language model is limited.
2. Microsoft's Bing Chat Is Free and Better at Times
There are a lot of differences between ChatGPT and Bing Chat. One is essentially an AI-powered search engine, while the other is more of a "traditional" chatbot. However, this does not mean you can't use Bing Chat for the same purposes. It's free to use, can access information from the internet in real-time, doesn't have as many availability issues, and uses GPT-4 technology as well.
Accessing information directly from the internet makes Bing Chat much more versatile than ChatGPT. Unfortunately, Microsoft is cautious with how it responds to certain prompts. Sometimes Bing will refuse to respond to controversial topics. Still, it's worth trying out before you pay for ChatGPT Plus to see if it suits your needs.
3. The Free Version Is Still Good in Most Cases
If the pros outweigh the cons for you, paying $20/month for a highly versatile chatbot might not sound too bad. But while all the benefits we mentioned above are certainly attractive, it's hard to ignore the obvious—the free version is good enough for most people.
Now, if faster response times and better availability truly matter to you, those reasons are enough to upgrade. However, by incorporating the right prompting techniques with ChatGPT, you'll find that the free version is still very serviceable. This isn't really a drawback of the Plus version (bar the cost, of course); it's just that the free version is still really useful.
While Not For Everyone, ChatGPT Plus Is Incredibly Impressive
ChatGPT Plus takes an already impressive chatbot and adds massive improvements to it. While it's not a quantum leap forward just yet, it is surprising how fast these AI tools continue to grow. Whether or not the upgrade is worth it comes down to your personal needs. If you feel impressed by GPT-4, it's well worth trying out for $20.
ABOUT THE AUTHOR
Hashir Ibrahim • Staff Writer for Mac and iOS (114 Articles Published)
With over three years of in-depth experience working in technical fields, Hashir is a master content writer who loves writing about Mac and iOS at MakeUseOf.
Does ChatGPT Learn From User Conversations?
Is ChatGPT saving what you wrote in its central memory for future use?
With millions of ChatGPT users, you might wonder what OpenAI does with all its conversations. Does it constantly analyze the things you talk about with ChatGPT?
The answer to that is, yes, ChatGPT learns from user input—but not in the way that most people think. Here's an in-depth guide explaining why ChatGPT tracks conversations, how it uses them, and whether your security is compromised.
Does ChatGPT Remember Conversations?
ChatGPT doesn't take prompts at face value. It uses contextual memory to remember and reference previous inputs, ensuring relevant, consistent responses.
Take the below conversation as an example. When we asked ChatGPT for recipe ideas, it considered our previous message about peanut allergies.
Here's ChatGPT's safe recipe.
Contextual memory also lets AI execute multi-step tasks. The below image shows ChatGPT staying in character even after feeding it a new prompt.
ChatGPT can remember dozens of instructions within a conversation. Its output actually improves in accuracy and precision as you provide more context. Just make sure you state your instructions explicitly.
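Under the hood, chat models like ChatGPT are stateless per request; the apparent memory comes from resending the earlier turns of the conversation alongside each new prompt. Here is a minimal sketch of that pattern in Python (the function and variable names are illustrative, not OpenAI's actual API):

```python
# Sketch: how "contextual memory" typically works with chat-style models.
# The model itself is stateless; prior turns are resent with every request.
# build_messages is a hypothetical helper, not an OpenAI function.

def build_messages(history, new_prompt):
    """Combine earlier turns with the latest prompt into one request payload."""
    return history + [{"role": "user", "content": new_prompt}]

history = [
    {"role": "user", "content": "I have a peanut allergy."},
    {"role": "assistant", "content": "Noted. I'll avoid peanut-based recipes."},
]

messages = build_messages(history, "Suggest a dinner recipe.")
# The full list, including the allergy note, accompanies the new prompt,
# which is why the model can "remember" it when suggesting recipes.
```

This also explains why longer conversations eventually degrade: every turn adds to the payload that must fit inside the model's context window.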
You should also manage your expectations because ChatGPT's contextual memory still has limitations.
ChatGPT Conversations Have Limited Memory Capacities
Contextual memory is finite. ChatGPT has a limited context window, so it only remembers a certain amount of the current conversation. The platform forgets your earliest prompts once you hit that capacity.
In this conversation, we instructed ChatGPT to roleplay a fictional character named Tomie.
It started answering prompts as Tomie, not ChatGPT.
Although our request worked, ChatGPT broke character after receiving a 1,000-word prompt.
OpenAI has never disclosed ChatGPT's exact limits, but rumors say it can only process 3,000 words at a time. In our experiment, ChatGPT malfunctioned after just 2,800+ words.
You can break your prompts into two 1,500-word sets, but ChatGPT likely won't retain all your instructions, so it's often better to start another chat altogether. Otherwise, you'll have to repeat specific details several times throughout your conversation.
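One way to picture the limit is as a rolling budget: once a conversation exceeds it, the oldest turns fall out first. The sketch below uses word counts as a crude stand-in for the model's real token limit, and the 3,000-word budget is the rumored figure discussed above, not an official number:

```python
# Sketch: drop the oldest turns once a conversation exceeds a rough word budget.
# Word count is only an approximation of the model's actual token limit.

WORD_BUDGET = 3000  # the rumored limit; not an official OpenAI figure

def word_count(messages):
    """Total words across all messages in the conversation."""
    return sum(len(m["content"].split()) for m in messages)

def trim_history(messages, budget=WORD_BUDGET):
    """Remove the oldest messages until the conversation fits the budget."""
    trimmed = list(messages)
    while len(trimmed) > 1 and word_count(trimmed) > budget:
        trimmed.pop(0)  # the oldest turn is forgotten first
    return trimmed

convo = [
    {"role": "user", "content": "word " * 2000},   # early 2,000-word prompt
    {"role": "user", "content": "word " * 1500},   # later 1,500-word prompt
    {"role": "user", "content": "keep these key instructions"},
]

convo = trim_history(convo)
# The earliest 2,000-word turn is dropped; the remaining turns fit the budget.
```

This is also why instructions given early in a long chat stop working: they are exactly what gets trimmed away first.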
ChatGPT Only Remembers Topic-Relevant Inputs
ChatGPT uses contextual memory to improve output accuracy; it doesn't retain information just for the sake of collecting it. The platform tends to forget irrelevant details, even if you're far from hitting the token limit.
In the below image, we tried to confuse the AI with various incoherent, irrelevant instructions.
We kept our combined inputs under 100 words, but ChatGPT still forgot our first instruction. It quickly broke character.
Meanwhile, ChatGPT kept roleplaying during this conversation because we only asked topic-relevant questions.
Ideally, each dialogue must follow a singular theme to maintain accurate, relevant outputs. You can still input several instructions simultaneously. Just ensure they align with the overall topic, or else ChatGPT might drop instructions that it deems irrelevant.
Training Instructions Overpower User Input
ChatGPT will always prioritize its predetermined instructions over user-generated input. Built-in restrictions stop it from assisting with illicit activities: the platform rejects any prompt it deems dangerous or damaging to others.
Take roleplay requests as examples. Although they override certain limitations on language and phrasing, you can't use them to commit illicit activities.
Of course, not all restrictions are reasonable. If rigid guidelines make it challenging to execute specific tasks, keep rewriting your prompts. Word choice and tone heavily affect outputs. You can take inspiration from the most effective, detailed prompts on GitHub.
How Does OpenAI Study User Conversations?
Contextual memory only applies to your current conversation. ChatGPT's stateless architecture treats conversations as independent instances; it can't reference information from previous ones. Starting new chats always resets the model's state.
This isn't to say that ChatGPT dumps user conversations instantly. OpenAI's terms of use state that the company collects inputs from non-API consumer services like ChatGPT and DALL-E. You can even ask for copies of your chat history.
While OpenAI retains access to your conversations, its privacy policy prohibits activities that might compromise users. Trainers can only use your data for product research and development.
Developers Look for Loopholes
OpenAI sifts through conversations for loopholes. It analyzes instances wherein ChatGPT demonstrates data biases, produces harmful information, or helps commit illicit activities. The platform's ethical guidelines are constantly revamped.
For instance, the first versions of ChatGPT openly answered questions about coding malware or constructing explosives. These incidents made users feel like OpenAI has no control over ChatGPT. To regain the public's trust, it trained the chatbot to reject any question that might go against its guidelines.
Trainers Collect and Analyze Data
ChatGPT uses supervised learning techniques. Although the platform remembers all inputs, it doesn't learn from them in real time. OpenAI trainers collect and analyze them first, ensuring that ChatGPT never absorbs the harmful, damaging information it receives.
Supervised learning requires more time and energy than unsupervised techniques. However, leaving AI to analyze input alone has already been proven harmful.
Take Microsoft Tay as an example of machine learning gone wrong. Since it constantly analyzed tweets without developer guidance, malicious users eventually trained it to spout racist, stereotypical opinions.
Developers Constantly Watch Out for Biases
Several external factors cause biases in AI. Unconscious prejudices may arise from differences in training models, dataset errors, and poorly constructed restrictions. You'll spot them in various AI applications.
Thankfully, ChatGPT has rarely demonstrated overtly discriminatory or racist biases. Perhaps the worst bias users have noticed is ChatGPT's inclination toward left-wing ideologies, according to a New York Post report. The platform more openly writes about liberal than conservative topics.
To resolve these biases, OpenAI prohibited ChatGPT from providing political insights altogether. It can only answer general facts.
Moderators Review ChatGPT's Performance
Users can provide feedback on ChatGPT's output via the thumbs-up and thumbs-down buttons on the right side of every response. After hitting either button, a window pops up where you can send feedback in your own words.
The feedback system is helpful, but give OpenAI some time to sift through the comments. Millions of users comment on ChatGPT regularly, and its developers likely prioritize serious instances of bias and harmful output generation.
Are Your ChatGPT Conversations Safe?
Considering OpenAI's privacy policies, you can rest assured that your data will remain safe. ChatGPT only uses conversations for data training. Its developers study the collected insights to improve output accuracy and reliability, not steal personal data.
With that said, no AI system is perfect. ChatGPT isn't inherently biased, but malicious individuals could still exploit its vulnerabilities, such as dataset errors, careless training, and security loopholes. For your protection, learn to combat these risks.
ABOUT THE AUTHOR
Jose Luansing Jr. • Staff Writer for Work & Career (58 Articles Published)
Jose Luansing Jr. is a staff writer at MUO. He has written thousands of articles on tech, freelance tools, career advancement, business, AI, and finance since 2017.
As a writer, Jose’s goal is to share advice on self-improvement and upskilling. He helps readers understand the real-life applications of various systems, plus how these support career advancement.
Recently, Jose has also been testing AI systems. He believes that AI is inherently unbiased—all hallucinations, inconsistencies, and security risks stem from humans.