
Premium Features

Get a Taste Tier

Skip The Waiting Line

Our waiting room limits how many people can use the platform at the same time, which lets us make sure enough hardware is provisioned to support our active users. We can offer our service for free thanks to premium subscribers who are willing to support the platform financially. This benefit lets you skip the waiting line every time you want to chat with our chatbots.

Memory Manager

While you don't get access to Semantic Memory 2.0, you will get access to the Memory Manager, which lets you add memories that you believe are important to your conversation. These memories must be added manually and are not created automatically.

True Supporter Tier

4K Context (Memory)

When you chat with our bots, we generate a text prompt composed of the bot's personality, example dialogues, and a portion of your conversation history. The longer the prompt, the more server resources it uses. Different models support different context sizes, but all of the available models support at least 4096 tokens (words). We limit the context to 2K for the Free and Get a Taste tiers, while True Supporters can enjoy a limit of up to 4K.

When calculating total tokens (words), we need to account for both input and output. Assuming an allowed output of 180 tokens and a bot definition of 900 tokens, free tier users in that example would have about 1,000 tokens (2048 - 900 - 180 = 968) left to fit the last few turns of their conversation.

In that same example, premium subscribers would have 3016 tokens (4096 - 900 - 180) to fit the conversation, meaning roughly three times more of the conversation history can be included in the prompt.

That means the response can better leverage what was discussed before.
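
To illustrate the arithmetic above, here is a minimal sketch. The context sizes, 900-token definition, and 180-token output are just the example numbers from this section, not fixed platform values:

```python
# Illustrative token-budget arithmetic using the example numbers above.
CONTEXT_SIZES = {"free": 2048, "get_a_taste": 2048, "true_supporter": 4096}

def conversation_budget(tier: str, definition_tokens: int = 900,
                        max_output_tokens: int = 180) -> int:
    """Tokens left for conversation history after the bot definition
    and the reserved response length are subtracted."""
    return CONTEXT_SIZES[tier] - definition_tokens - max_output_tokens

print(conversation_budget("free"))            # 968  (~1K tokens)
print(conversation_budget("true_supporter"))  # 3016 (~3x more history)
```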

Longer Responses

Thanks to the 4096-token context, the True Supporter and I'm All In tiers benefit from a maximum response of up to 300 tokens instead of the default 180. This means responses will less often appear incomplete due to truncation.

Semantic Memory

Even with 4K Context, because we need to fit our prompt within the 4096 tokens available, we can only include a portion of your conversation history. With semantic memory, we try to find semantically relevant (based on meaning) portions of your previous conversation and include those tidbits in the prompt, even if they are not the most recent things discussed.

An example of this would be if you were discussing the details of a particular book and then shifted the conversation to a different topic, like music. If later in the conversation you refer back to the book, even if several turns have occurred since the book was last mentioned, the messages most relevant to the book would be added to the prompt.

The advantage of semantic memory is that it doesn't rely solely on how recent a part of the conversation is, but also on how relevant it is.
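
A simplified illustration of the idea (not our actual implementation): each past message is turned into an embedding vector, and the messages whose meaning is closest to your latest message are pulled back into the prompt. The embed() function below is a hypothetical stand-in for any sentence-embedding model.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall_relevant(history, latest_message, embed, k=3):
    """Return the k past messages most similar in meaning to the latest
    message, regardless of how long ago they were sent."""
    query = embed(latest_message)
    scored = [(cosine(embed(message), query), message) for message in history]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [message for _, message in scored[:k]]
```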

Semantic Memory 2.0

Semantic Memory 2.0 is now available to True Supporters. Please read about it here.

Conversation Images

This new feature allows generating images within your conversation. Our AI will use the bot image, bot definition, and the last turn of your conversation to generate images on request. You can read more about this feature.

As a True Supporter, you'll have the capability to generate images with all public chatbots that have been trained for image generation; however, training new or private chatbots to create images in conversation requires the I'm All In tier.

I'm All In Tier

Priority Generation Queue

This benefit ensures that your requests are given priority over other users. Whenever you initiate a conversation or send a message, your request is placed at the front of the queue for immediate processing. This results in faster response times and a smoother chatting experience, especially during peak usage hours or when server demand is high.
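
As an illustrative sketch only (not our actual backend), a priority queue shows the idea: requests from this tier carry a higher priority and are processed before standard requests, regardless of arrival order.

```python
import heapq

# Illustrative sketch: lower priority numbers are served first, and the
# arrival counter keeps equal-priority requests in first-come, first-served order.
PRIORITY = {"im_all_in": 0, "standard": 1}

queue = []
incoming = [
    ("standard", "message from a free user"),
    ("standard", "message from a True Supporter"),
    ("im_all_in", "message from an I'm All In subscriber"),
]
for arrival, (tier, request) in enumerate(incoming):
    heapq.heappush(queue, (PRIORITY[tier], arrival, request))

while queue:
    _, _, request = heapq.heappop(queue)
    print(request)  # the I'm All In request is processed first
```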

Conversation Images on Private Chatbots

You can train the AI on your private chatbots' images so that they can generate conversation images.

Generation Settings

You can control inference temperature, top_p, and top_k. This is for the most advanced users who like to experiment.
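
If you're curious what these knobs do, here is a rough, generic sketch of how a sampler might apply them (this is an illustration, not PixelChat's actual inference code): temperature rescales the distribution, top_k keeps only the k most likely tokens, and top_p keeps the smallest set of tokens whose cumulative probability reaches p.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, top_k=40, top_p=0.9):
    """Generic illustration of temperature / top_k / top_p sampling."""
    logits = np.asarray(logits, dtype=np.float64)

    # Temperature: values below 1 sharpen the distribution (more predictable),
    # values above 1 flatten it (more random / creative).
    logits = logits / max(temperature, 1e-6)

    # Top-k: discard everything except the k most likely tokens.
    if 0 < top_k < logits.size:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)

    # Convert the remaining logits to probabilities.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability reaches p, renormalize, and sample from it.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, top_p)) + 1]
    nucleus = np.zeros_like(probs)
    nucleus[keep] = probs[keep]
    nucleus /= nucleus.sum()

    return int(np.random.choice(probs.size, p=nucleus))

# Toy example with a 5-token vocabulary.
print(sample_next_token([2.0, 1.0, 0.5, 0.1, -1.0]))
```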

Access to Advanced Models

You get access to over 10 different models, each designed for specific needs and preferences. These models range from 8 billion to 141 billion parameters. For instance, the WizardLM-141B model is roughly 17 times larger than our default model by parameter count, offering unmatched performance and capabilities.

These models are optimized for various purposes, allowing you to choose the one that best suits your role-play style and taste.

For a limited time, you'll get to experiment with our very powerful model, WizardLM-141B.

Why is WizardLM-2 special?

  • Multilingual Capabilities: Fluent in English, French, Italian, German, and Spanish. It'll beat all the models that you have tried so far.

  • Optimized for Reasoning: WizardLM-2 is optimized for reasoning, which makes it perfect for engaging and deep role-playing.

8k Context (Memory)

You get access to 8192 tokens, doubling your bot's memory compared to what fits in a 4K context!
