One interface, many LLMs

Invisibility offers a unified interface for accessing multiple advanced AI models

Written by Tye Daniel

Published on Apr 20, 2024

The world of software experienced a significant shift with the introduction of ChatGPT. OpenAI's web app attracted 100 million users within two months of its release, an unprecedented milestone. Since then, software companies of every size have begun formulating their own AI strategies. Invisibility aims to offer the fastest way to use large language models like ChatGPT on the Mac. And Invisibility goes beyond ChatGPT: i.inc offers Claude, Gemini, Perplexity, and many other models, all under one subscription.

Why support different models?

It's important to understand the reasoning behind supporting multiple AI models. The majority of applications rely on a single model, often opting for OpenAI's offerings such as ChatGPT. When GPT-4 was initially released, it seemed to have it all: advanced intelligence and reasoning, impressive speed, the ability to take in a boatload of text, and even the capability to process visual input in the form of images rather than being limited to text alone.

Over time, however, the AI landscape underwent a rapid transformation. New models emerged at an unprecedented pace, each bringing its own set of unique advantages to the table. Recently, Anthropic's best-in-class LLM, Claude 3 Opus, set the benchmark for model quality. Speaking with our users made it clear that they wanted access to a broader range of models. Taking this feedback to heart, we determined which models would be most beneficial and practical to integrate into Invisibility.

Comparing AI Models

When evaluating AI models, three key factors come into play: speed, differentiated intelligence, and context window. Each aspect has its own implications and trade-offs.

Speed is always desirable, but more capable models with more parameters require more compute, which impacts performance at scale. Intelligence, which is less a matter of true reasoning or general knowledge than of training data and scale, varies from model to model. Lastly, a larger context window lets a model consider more text at once, but generally at the cost of a slower response.

Finding the right balance is crucial, as there is no one-size-fits-all solution. Selecting the right model means understanding these trade-offs and aligning them with your project's requirements.
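To make the trade-off concrete, here is a minimal sketch of how one might pick a model given those three axes. The model names, scores, and `pick_model` helper are all hypothetical illustrations, not real benchmark numbers or Invisibility's actual logic:

```python
# Hypothetical sketch: choosing a model by trading off speed,
# capability, and context window. All names and scores are illustrative.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    speed: int           # relative tokens/sec, higher is faster
    capability: int      # relative reasoning quality, higher is better
    context_window: int  # maximum input size in tokens

MODELS = [
    Model("fast-small", speed=9, capability=5, context_window=32_000),
    Model("balanced", speed=6, capability=7, context_window=128_000),
    Model("large-reasoner", speed=3, capability=9, context_window=200_000),
]

def pick_model(prompt_tokens: int, prefer_speed: bool) -> Model:
    """Drop models whose context window is too small for the prompt,
    then rank the rest by whichever attribute matters most."""
    candidates = [m for m in MODELS if m.context_window >= prompt_tokens]
    key = (lambda m: m.speed) if prefer_speed else (lambda m: m.capability)
    return max(candidates, key=key)

# A 150k-token prompt rules out everything but the long-context model.
print(pick_model(150_000, prefer_speed=False).name)  # large-reasoner
```

The filter-then-rank shape captures the point above: the context window is a hard constraint, while speed and capability are preferences to be weighed against each other.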

One native interface, all the best models

When evaluating different AI models, we at Invisibility sought to identify the unique strengths and capabilities of each one. We also recognize the importance of empowering users to select the most suitable model for their specific requirements. To that end, Invisibility surfaces more comprehensive information about each model's characteristics and performance, adding icons that convey each model's capabilities at a glance.
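The icon idea can be sketched as a simple capability-to-glyph mapping. The capability tags and icon choices below are hypothetical stand-ins, not Invisibility's actual metadata:

```python
# Hypothetical sketch: tagging models with capability icons so a UI
# can convey strengths at a glance. Tags and glyphs are illustrative.
CAPABILITY_ICONS = {
    "vision": "👁",        # accepts image input
    "web_search": "🌐",    # answers with live online information
    "long_context": "📚",  # very large context window
    "fast": "⚡",          # optimized for rapid output
}

MODEL_CAPABILITIES = {
    "Claude 3 Opus": ["vision", "long_context"],
    "Perplexity": ["web_search"],
    "Groq Llama": ["fast"],
}

def icons_for(model: str) -> str:
    """Render a model's capability tags as a row of icons."""
    return " ".join(CAPABILITY_ICONS[c] for c in MODEL_CAPABILITIES[model])

print(icons_for("Claude 3 Opus"))  # 👁 📚
```

Keeping capabilities as data rather than hard-coded labels means new models slot in by adding one dictionary entry.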

Perplexity

You may be familiar with ChatGPT and its functionalities, yet there are other innovative models worth exploring. Take Perplexity's model, for example, which is tailored for answering questions while leveraging the latest online information. Using Invisibility, pose questions and get informed responses directly from Perplexity.

Perplexity believes that knowledge should be universally accessible. Using Invisibility, you can quickly gain insights anywhere and anytime on your Mac.

Anthropic

Anthropic has recently unveiled its Claude 3 series, which has quickly become a sensation. The series features three distinct models: Haiku, Sonnet, and Opus, each with a substantial 200,000-token context window, enough to process an entire book or a large PDF. Notably, Opus has achieved the top position on the Arena leaderboard as the "best" model, establishing itself as the premier choice for tackling intricate tasks with impressive ease and fluency.

Groq

The advancement of open-source models has been swift, with many experts touting them as the next big thing in large language models (LLMs). Known for its extremely fast output, Groq leads the pack as the fastest inference engine available. It drives both Llama and Mixtral 8x7B, making these models ideal for accelerating routine AI tasks through rapid command processing.

What’s to come?

AI Commands excel at recycling prompts, and we aim to bring similar capabilities to your AI Chats, such as auto-filling prompt suggestions based on previously used inputs. This will let you reuse models and system instructions, among other settings, helping you build a personalized collection of assistants tailored to your specific needs.
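One simple way to auto-fill suggestions from prior inputs is recency-ordered prefix matching. The `suggest` helper below is a hypothetical sketch of that idea, not the feature's actual implementation:

```python
# Hypothetical sketch: suggest past prompts that match what the
# user has typed so far, newest first. Illustrative only.
def suggest(history: list[str], typed: str, limit: int = 3) -> list[str]:
    """Return up to `limit` distinct past prompts, most recent first,
    that start with the text the user has typed (case-insensitive)."""
    seen: set[str] = set()
    matches: list[str] = []
    for prompt in reversed(history):  # newest entries first
        if prompt.lower().startswith(typed.lower()) and prompt not in seen:
            seen.add(prompt)
            matches.append(prompt)
        if len(matches) == limit:
            break
    return matches

history = ["Summarize this PDF", "Summarize this email", "Translate to French"]
print(suggest(history, "summ"))  # ['Summarize this email', 'Summarize this PDF']
```

A production version would likely rank by frequency as well as recency, but prefix matching over history is enough to show the shape of the feature.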

Integrating various models into a unified interface is crucial to becoming the AI user interface. We hope you find all the models beneficial! Have we overlooked a model with a unique "superpower" you'd like to see in Invisibility? Let us know!

© Invisibility Inc. 2024. All rights reserved.