AI Prompts¶
- re-render the Markdown preview after an edit:
  - activate the Command Palette by pressing Ctrl+Shift+C (it is also available under the "View" menu)
  - type "Render All Markdown Cells"
  - press Enter
Info¶
- How to Create a Virtual Environment and Use it on Jupyter Notebook (rough steps sketched below)
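A rough sketch of those steps in a terminal (the environment name myenv is just an example):
# create and activate a virtual environment, then register it as a Jupyter kernel
python3 -m venv myenv
source myenv/bin/activate
pip3 install ipykernel
python3 -m ipykernel install --user --name=myenv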
using passwords in Jupyter notebooks¶
In Python you can ask the user for input via the input function.
pwd = input("Password:")
When you run this command locally, here's what it might look like:
>>> pwd = input("Password:")
Password: supersecret
The pwd variable will contain the string "supersecret", but notice how the prompt echoes what the user is typing! That means that somebody sitting next to you, or looking at your screen over Zoom, can also read your password. That's bad.
getpass¶
For situations like this one, you may enjoy using the getpass module in Python instead. It has the same functionality but won't display the typed password.
>>> import getpass
>>> pwd = getpass.getpass("Password:")
Password:🗝️
No matter what you type, it won't be printed.
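The setup cells below use exactly this trick to collect API keys: the secret is read with getpass and kept only in an environment variable, so it never appears in the notebook output. A minimal sketch (MY_API_KEY is a placeholder name):
import os
import getpass

# prompt without echoing, then keep the secret in the process environment
os.environ['MY_API_KEY'] = getpass.getpass("enter MY_API_KEY: ")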
Prompts¶
...
LLM¶
Gemini¶
setup¶
In [12]:
import os
import getpass
os.environ['GOOGLE_API_KEY'] = getpass.getpass("enter GOOGLE_API_KEY: ")
# %env
In [13]:
# pip3 install -I google-generativeai==0.5.2
# pip3 install --upgrade --force-reinstall google-generativeai
# pip3 show google-generativeai
# pip3 index versions google-generativeai
# Overview of Generative AI on Vertex AI:
# https://cloud.google.com/vertex-ai/generative-ai/docs/learn/overview
# Google Gemini Documentation:
# https://github.com/google/generative-ai-docs?tab=readme-ov-file
import google.generativeai as genai
import google.ai.generativelanguage as glm
import os
GOOGLE_API_KEY = os.environ.get("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)
In [31]:
import pprint
# https://github.com/sigoden/aichat/blob/1e8fc5d269985048d8d3023a615b94a8908571cf/models.yaml#L79
for model in genai.list_models():
    # pprint.pprint(model)
    print(model.name)
models/chat-bison-001
models/text-bison-001
models/embedding-gecko-001
models/gemini-1.0-pro
models/gemini-1.0-pro-001
models/gemini-1.0-pro-latest
models/gemini-1.0-pro-vision-latest
models/gemini-1.5-pro-latest
models/gemini-pro
models/gemini-pro-vision
models/embedding-001
models/text-embedding-004
models/aqa
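Not every listed model supports text generation. A small sketch, assuming the supported_generation_methods attribute documented for the google-generativeai Model objects, keeps only the ones usable with generate_content:
# list only models that support the generateContent method
for model in genai.list_models():
    if 'generateContent' in model.supported_generation_methods:
        print(model.name)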
playground¶
In [ ]:
%%writefile prompt.txt
What's your name? Is it Gemini?
In [ ]:
PROMPT_FROM_FILE = ""
with open("prompt.txt", "r") as file:
    content = file.read()
PROMPT_FROM_FILE = content
In [32]:
PROMPT = """
What's your name? Is it Gemini
\"""
"""
CONTEXT = None
prompt = PROMPT
# prompt = PROMPT_FROM_FILE
model = genai.GenerativeModel('gemini-pro')
# response = model.generate_content(prompt)
response = model.generate_content(glm.Content(
    parts=[
        glm.Part(text=prompt),
        glm.Part(text=CONTEXT if CONTEXT is not None else ''),
    ],
))
print(response.parts[0].text)
No, my name is Gemini, a multi-modal AI language model developed by Google. My name is not Gemini.
Cohere¶
Calls made using Trial keys are free of charge. Trial keys are rate-limited, and cannot be used for commercial purposes.
setup¶
In [11]:
import os
import getpass
os.environ['COHERE_API_KEY'] = getpass.getpass("enter COHERE_API_KEY: ")
# %env
In [14]:
# pip3 install -I cohere==5.5.0
# pip3 install --upgrade --force-reinstall cohere
# pip3 show cohere
# pip3 index versions cohere
# Cohere Playground
# https://dashboard.cohere.com/playground/chat
import cohere
import os
COHERE_API_KEY = os.environ.get("COHERE_API_KEY")
co = cohere.Client(
    api_key=COHERE_API_KEY,  # This is your trial API key
)
In [31]:
import cohere
import pprint
# https://docs.cohere.com/reference/list-models
models = co.models.list()
# pprint.pprint(models)
for model in models.models:
    # pprint.pprint(model)
    print(model.name)
embed-english-light-v2.0
embed-english-v2.0
rerank-english-v3.0
command-r
embed-multilingual-light-v3.0
command-r-plus
embed-multilingual-v3.0
embed-multilingual-v2.0
command-light-nightly
rerank-multilingual-v2.0
embed-english-v3.0
command
rerank-multilingual-v3.0
rerank-english-v2.0
command-light
c4ai-aya
embed-english-light-v3.0
playground¶
In [38]:
PROMPT_FROM_FILE = ""
with open("prompt.txt", "r") as file:
    content = file.read()
PROMPT_FROM_FILE = content
In [39]:
PROMPT = """
revise e melhore o texto abaixo. Seja gentil e positivo. Use temperatura 1.
\"""
Zé, assim que tiver feito o trabalho de escola me avise? Pretendo usar esse trabalho para estudar pra prova.
\"""
"""
# prompt = PROMPT
prompt = PROMPT_FROM_FILE
stream = co.chat_stream(
    model='command-r-plus',
    message=prompt,
    temperature=0.3,
    chat_history=[],
    prompt_truncation='AUTO',
    connectors=[{"id": "web-search"}]
)
for event in stream:
    if event.event_type == "text-generation":
        print(event.text, end='')
My name is Coral. I am an AI-assistant chatbot developed to help users by providing thorough responses. I am powered by Command, a large language model built by the company Cohere. It's nice to meet you! Is there anything I can help you with today?
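Besides chat_stream, the v5 Cohere client also exposes a blocking co.chat call that returns the whole reply at once. A minimal sketch, assuming the same client and prompt as above:
# non-streaming variant: the .text attribute holds the generated message
response = co.chat(
    model='command-r-plus',
    message=prompt,
)
print(response.text)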
Groq¶
setup¶
In [15]:
import os
import getpass
os.environ['GROQ_API_KEY'] = getpass.getpass("enter GROQ_API_KEY: ")
# %env
In [16]:
# pip3 install -I groq==0.8.0
# pip3 install --upgrade --force-reinstall groq
# pip3 show groq
# pip3 index versions groq
# GROQ Playground
# https://console.groq.com/playground
# https://groq.com/
import os
from groq import Groq
groq_client = Groq(
    # This is the default and can be omitted
    api_key=os.environ.get("GROQ_API_KEY"),
)
In [50]:
# MODELS
# https://console.groq.com/docs/models
"""
curl -X GET "https://api.groq.com/openai/v1/models" \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json"
"""
import requests
import os
from pprint import pprint
api_key = os.environ.get("GROQ_API_KEY")
url = "https://api.groq.com/openai/v1/models"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json"
}
response = requests.get(url, headers=headers)
pprint(response.json())
{'data': [{'active': True, 'context_window': 8192, 'created': 1693721698, 'id': 'gemma-7b-it', 'object': 'model', 'owned_by': 'Google'},
          {'active': True, 'context_window': 8192, 'created': 1693721698, 'id': 'llama3-70b-8192', 'object': 'model', 'owned_by': 'Meta'},
          {'active': True, 'context_window': 8192, 'created': 1693721698, 'id': 'llama3-8b-8192', 'object': 'model', 'owned_by': 'Meta'},
          {'active': True, 'context_window': 32768, 'created': 1693721698, 'id': 'mixtral-8x7b-32768', 'object': 'model', 'owned_by': 'Mistral AI'}],
 'object': 'list'}
playground¶
In [45]:
PROMPT = """
revise e melhore o texto abaixo. Seja gentil e positivo. Use temperatura 1.
\"""
Zé, assim que tiver feito o trabalho de escola me avise? Pretendo usar esse trabalho para estudar pra prova.
\"""
"""
prompt = PROMPT
chat_completion = groq_client.chat.completions.create(
    messages=[{
        "role": "system",
        "content": "you are a helpful assistant."
    }, {
        "role": "user",
        "content": prompt,
    }],
    model="llama3-8b-8192",
)
print(chat_completion.choices[0].message.content)
Let's revise the text together! Here's a revised version with a gentle and positive tone:
"Hey, don't forget to remind me when the schoolwork is done so we can study for the exam together!"
How does that sound?
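The Groq SDK follows the OpenAI-style chat interface, so streaming works the same way as in the Perplexity example below: pass stream=True and read the incremental delta chunks. A minimal sketch, assuming the client and prompt above:
# streaming variant; delta.content can be None on the first/last chunks
stream = groq_client.chat.completions.create(
    messages=[{"role": "user", "content": prompt}],
    model="llama3-8b-8192",
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end='')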
Perplexity¶
setup¶
In [5]:
import os
import getpass
os.environ['PERPLEXITY_API_KEY'] = getpass.getpass("enter PERPLEXITY_API_KEY: ")
# %env
playground¶
In [20]:
# https://docs.perplexity.ai/docs/getting-started
# https://docs.perplexity.ai/docs/model-cards
from openai import OpenAI
import os
messages = [{
    "role": "system",
    "content": (
        "You are an artificial intelligence assistant and you need to "
        "engage in a helpful, detailed, polite conversation with a user."
    ),
}, {
    "role": "user",
    "content": (
        "How many stars are in the universe?"
    ),
}]
perplexity_client = OpenAI(api_key=os.environ.get("PERPLEXITY_API_KEY"), base_url="https://api.perplexity.ai")
# chat completion without streaming
response = perplexity_client.chat.completions.create(
    model="llama-3-sonar-large-32k-online",
    messages=messages,
)
print(response.choices[0].message.content)
print('---')
# chat completion with streaming
response_stream = perplexity_client.chat.completions.create(
    model="llama-3-sonar-large-32k-online",
    messages=messages,
    stream=True,
)
for response in response_stream:
    print(response.choices[0].delta.content, end='')
The number of stars in the universe is a mind-boggling topic that has fascinated humans for centuries. According to estimates from various sources, including NASA and the European Space Agency (ESA), the universe could contain up to **one septillion stars** – that's a one followed by 24 zeros. This number is based on the assumption that there are approximately **2 trillion galaxies** in the observable universe, with each galaxy containing around **100 billion stars** on average.

To put this number into perspective, if we were to count the stars in the universe at a rate of one star per second, it would take us over **6,000 years** to count just the stars in the Milky Way galaxy alone, which is estimated to have over **100 billion stars**. The sheer scale of the universe and the number of stars it contains is truly awe-inspiring.

It's worth noting that these estimates are rough and based on current observations and understanding of the universe. As new data and missions become available, our understanding of the universe and its contents may change, potentially leading to revised estimates of the number of stars.

In summary, the number of stars in the universe is estimated to be around **200 billion trillion**, or **200 sextillion**, which is an almost incomprehensible number that highlights the vastness and complexity of the cosmos.

---

The number of stars in the universe is a staggering and difficult to comprehend figure. According to various estimates, there are approximately **200 billion trillion stars** in the universe. This number is derived by multiplying the estimated number of galaxies in the universe (around 2 trillion) by the average number of stars in a typical galaxy (about 100 billion).

The Milky Way, which is just one of the galaxies in the universe, is estimated to contain around **100 billion stars**. This number is based on observations of the galaxy's structure and the diversity of its stars, which come in different sizes and colors. Our Sun is a medium-sized, medium-weight, and medium-hot star, with a surface temperature of about 27 million degrees Fahrenheit (15 million degrees Celsius).

The process of counting the stars in the universe involves several steps. First, astronomers estimate the number of galaxies in the universe by taking detailed pictures of small parts of the sky and counting the galaxies in those pictures. They then multiply this number by the number of pictures needed to photograph the whole sky. Next, they estimate the number of stars in a typical galaxy, like the Milky Way, by measuring the starlight and its color and brightness. Finally, they multiply the number of stars in a typical galaxy by the number of galaxies in the universe to get the total number of stars.

It's worth noting that these estimates are rough and based on current observations and understanding of the universe. The actual number of stars could be higher or lower, and new discoveries and missions, such as the Gaia mission, are helping to refine our understanding of the universe and its contents.
OpenAI¶
setup¶
- reference:
  - Libraries
  - API Reference
  - Python library
  - OpenAI Python API library: the official Python library for the OpenAI API
In [13]:
# pip3 install --upgrade openai==1.28.1
# pip3 show openai
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass("enter OPENAI_API_KEY: ")
# %env
In [14]:
from openai import OpenAI
clientOpenAI = OpenAI()
In [7]:
# https://github.com/sigoden/aichat/blob/1e8fc5d269985048d8d3023a615b94a8908571cf/models.yaml#L79
models = clientOpenAI.models.list()
# print(models)
for model in models:
    print(model.id)
dall-e-3
whisper-1
davinci-002
dall-e-2
gpt-3.5-turbo-16k
tts-1-hd-1106
tts-1-hd
gpt-4
gpt-4-0613
gpt-3.5-turbo-1106
gpt-3.5-turbo-instruct-0914
gpt-3.5-turbo-instruct
tts-1
gpt-3.5-turbo
gpt-3.5-turbo-0301
gpt-4-turbo-2024-04-09
babbage-002
gpt-4-1106-preview
gpt-4-0125-preview
tts-1-1106
gpt-3.5-turbo-0125
gpt-4-turbo-preview
text-embedding-3-large
text-embedding-3-small
gpt-3.5-turbo-0613
text-embedding-ada-002
gpt-4-1106-vision-preview
gpt-4-turbo
gpt-4-vision-preview
gpt-3.5-turbo-16k-0613
playground¶
In [4]:
completion = clientOpenAI.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."},
    ]
)
# print(completion.choices[0].message)
# print('\n')
print(completion.choices[0].message.content)
In the heart of code's intricate dance,
Lies a concept that gives programmers a chance,
Recursion, a method both clever and rare,
A game of mirrors in the programming lair.

Like a mirror reflecting its own reflection,
Recursion calls upon its own direction,
A function that calls itself to solve a task,
Creating a looping, recursive mask.

It's a dance of echoes, a rhythmic beat,
Repeating steps until the code's complete,
Like a fractal that deepens with each call,
Recursion scales up, never to stall.

From trees to lists, from stacks to queues,
Recursion weaves through data with ease,
A powerful tool in the coder's hand,
Unraveling complexity, a wonderland.

So embrace the loop that echoes within,
Let recursion's magic under your skin,
For in the world of programming's song,
Recursion dances gracefully, forever strong.
Mistral¶
setup¶
In [15]:
# pip3 install --upgrade mistralai==0.1.8
# pip3 show mistralai
# https://github.com/mistralai/client-python
import os
import getpass
os.environ['MISTRAL_API_KEY'] = getpass.getpass("enter MISTRAL_API_KEY: ")
# %env
In [16]:
import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage
api_key = os.environ["MISTRAL_API_KEY"]
# https://docs.mistral.ai/getting-started/models/#api-versioning
modelMistral = "mistral-tiny"
clientMistral = MistralClient(api_key=api_key)
In [44]:
# https://docs.mistral.ai/getting-started/models/#api-versioning
# https://github.com/sigoden/aichat/blob/1e8fc5d269985048d8d3023a615b94a8908571cf/models.yaml#L79
# print(client.list_models().data)
models = clientMistral.list_models().data
for model in models:
    print(model.id)
open-mistral-7b
mistral-tiny-2312
mistral-tiny
open-mixtral-8x7b
open-mixtral-8x22b
open-mixtral-8x22b-2404
mistral-small-2312
mistral-small
mistral-small-2402
mistral-small-latest
mistral-medium-latest
mistral-medium-2312
mistral-medium
mistral-large-latest
mistral-large-2402
mistral-embed
playground¶
In [45]:
chat_response = clientMistral.chat(
    model=modelMistral,
    messages=[ChatMessage(role="user", content="What is the best French cheese?")],
)
print(chat_response.choices[0].message.content)
Determining the "best" French cheese is subjective, as it depends on personal preferences, as there are various types of French cheese, each with unique flavors and textures. Here are a few popular ones, each with its distinct characteristics: 1. Roquefort: A blue-veined sheep's milk cheese from the Massif Central region. It has a strong, pungent smell and a tangy, savory taste. 2. Camembert: A soft, creamy, and runny cow's milk cheese from Normandy. It has a strong, earthy flavor with a distinct mushroomy aroma. 3. Comté: A firm, nutty, and slightly sweet cow's milk cheese from Franche-Comté. It has a rich, complex flavor and a smooth, dense texture. 4. Brie de Meaux: A soft, creamy cow's milk cheese with a velvety white rind and a mild, buttery flavor. It comes from the Île-de-France region. 5. Munster: A soft, pungent, and slightly sweet cow's milk cheese from Alsace. It has a mellow, nutty flavor with a distinctive smell. To find your favorite French cheese, it's best to try a variety and taste the differences for yourself.
Claude¶
setup¶
In [5]:
# pip3 install --upgrade anthropic==0.25.8
# pip3 show anthropic
# https://docs.anthropic.com/en/api/client-sdks#python
import os
import getpass
os.environ['ANTHROPIC_API_KEY'] = getpass.getpass("enter ANTHROPIC_API_KEY: ")
# %env
In [7]:
# https://docs.anthropic.com/en/api/client-sdks#python
# https://github.com/anthropics/anthropic-sdk-python
import os
import anthropic
api_key = os.environ["ANTHROPIC_API_KEY"]
clientAnthropic = anthropic.Anthropic(
    api_key=api_key,
)
playground¶
In [21]:
# List of Models
# https://docs.anthropic.com/en/docs/models-overview#claude-3-a-new-generation-of-ai
# https://github.com/sigoden/aichat/blob/1e8fc5d269985048d8d3023a615b94a8908571cf/models.yaml#L79
message = clientAnthropic.messages.create(
    model="claude-3-opus-20240229",
    # model="claude-instant-1.2",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Hello, Claude"}
    ]
)
print(message.content)
[TextBlock(text="Hello! It's nice to meet you. How are you doing today?", type='text')]
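Note that message.content is a list of content blocks rather than a plain string; to print only the text, index the first block (the llm_claude helper in the "All At Once" section below does exactly this):
print(message.content[0].text)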
All At Once¶
In [8]:
"""
GEMINI
"""
def llm_gemini(prompt):
modelGemini = genai.GenerativeModel('gemini-pro')
response = modelGemini.generate_content(prompt)
print(f"""
GEMINI:
{response.text}
""")
"""
COHERE
"""
def llm_cohere(prompt):
stream = co.chat_stream(
model='command-r-plus',
message=prompt,
temperature=0.3,
chat_history=[],
prompt_truncation='AUTO',
connectors=[{"id":"web-search"}]
)
print(f"""
COHERE:
""")
for event in stream:
if event.event_type == "text-generation":
print(event.text, end='')
print("")
"""
GROQ
"""
def llm_groq(prompt):
chat_completion = groq_client.chat.completions.create(
messages=[{
"role": "user",
"content": prompt,
}],
model="llama3-8b-8192",
)
print(f"""
GROQ:
{chat_completion.choices[0].message.content}
""")
"""
PERPLEXITY
"""
def llm_perplexity(prompt):
response = perplexity_client.chat.completions.create(
model="llama-3-sonar-large-32k-online",
messages=[{
"role": "user",
"content": (
prompt
),
}],
)
print(f"""
PERPLEXITY:
{response.choices[0].message.content}
""")
"""
OPEN AI
"""
def llm_openai(prompt):
completion = clientOpenAI.chat.completions.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": prompt}
]
)
print(f"""
OPEN AI:
{completion.choices[0].message.content}
"""
)
"""
MISTRAL
"""
def llm_mistral(prompt):
chat_response = clientMistral.chat(
model=modelMistral,
messages=[ChatMessage(role="user", content=prompt)],
)
print(f"""
MISTRAL:
{chat_response.choices[0].message.content}
"""
)
"""
CLAUDE ANTHROPIC
"""
def llm_claude(prompt):
message = clientAnthropic.messages.create(
# model="claude-3-opus-20240229",
model="claude-instant-1.2",
max_tokens=1024,
messages=[
{"role": "user", "content": prompt}
]
)
print(f"""
CLAUDE ANTHROPIC:
{message.content[0].text}
""")
LLM = {
"GEMINI": llm_gemini,
"COHERE": llm_cohere,
"GROQ": llm_groq,
"PERPLEXITY": llm_perplexity,
"OPENAI": llm_openai,
"MISTRAL": llm_mistral,
"CLAUDE": llm_claude
}
In [17]:
PROMPT = """
Proofread and improve the text below. Be positive and humble.
\"""
Sorry for the delay, I have been busy with something else that was timeconsuming me.
\"""
"""
prompt = PROMPT
# for llm in ["PERPLEXITY"]:
for llm in ["GEMINI", "COHERE", "GROQ", "PERPLEXITY"]:
    LLM[llm](prompt)
GEMINI:
Sure, here is an improved version of the text:
"Thank you for your patience. I apologize for the delay in my response. I have been preoccupied with another matter that has required a significant amount of my time and attention."

COHERE:
Sure! Here is a revised version:
"My apologies for the delay. I was caught up with some time-consuming tasks, but I am back on track now and ready to assist you."
This version maintains a positive and humble tone while also conveying a sense of professionalism and respect for the recipient's time. It also sets a polite and friendly tone for further communication.

GROQ:
What a great start! It's truly wonderful to see you're reaching out. Here's a refined version that captures your tone and intention while making it more polished:
"I wanted to apologize for the delay. I've been tackling a personal project that consumed most of my time, and I didn't want to compromise on quality. Your patience and understanding are greatly appreciated!"
Feel free to modify it to fit your personal style and tone. Remember, I'm here to help!

PERPLEXITY:
Here's a revised version of the text:
"I apologize for the delay. I've been fully engaged in another important task that required my attention. I'm now refocused and ready to move forward."
This revised text still acknowledges the delay and expresses regret, but it does so in a more professional and polite manner. It also provides a brief explanation for the delay without making excuses, and it ends on a positive note by indicating that you're now ready to move forward.
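Since LLM is a plain dict of helper functions, fanning the same prompt out to every configured provider is just a loop over the dict:
# run the prompt through all seven providers
for name in LLM:
    LLM[name](prompt)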