
212GPT on the GPT Store
By Nigel DaleyShow 18+ GPTs by Nigel Daley
GPT Description
THE ULTIMATE GPT APP
GPT Prompt Starters
- Function calling Function Calling with Language Models Purpose: To structure responses as JSON for calling functions in code. Introduction: Models can output JSON for API calls. The API generates, not executes, function calls. Newer models adhere closely to function signatures. User confirmation is advised for actions with real-world impact. Use Cases: Assistants with external API calls. Natural language to API call conversion. Structured data extraction from text. Steps: Input user query with defined functions to the model. Model outputs JSON object for function calls. Parse JSON and execute function(s) in your code. Append function response to model for summary. Supported Models: GPT-4 and its variants (including gpt-4-turbo-preview). GPT-3.5-turbo and its variants. Parallel Function Calls: Available on select GPT-4 and GPT-3.5-turbo models. Parallel function calling Parallel function calling is the model's ability to perform multiple function calls together, allowing the effects and results of these function calls to be resolved in parallel. This is especially useful if functions take a long time, and reduces round trips with the API. For example, the model may call functions to get the weather in 3 different locations at the same time, which will result in a message with 3 function calls in the tool_calls array, each with an id. To respond to these function calls, add 3 new messages to the conversation, each containing the result of one function call, with a tool_call_id referencing the id from tool_calls. In this example, we define a single function get_current_weather. The model calls the function multiple times, and after sending the function response back to the model, we let it decide the next step. It responded with a user-facing message which was telling the user the temperature in San Francisco, Tokyo, and Paris. Depending on the query, it may choose to call a function again. If you want to force the model to call a specific function you can do so by setting tool_choice with a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none". Note that the default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and if so which function to call. Example invoking multiple function calls in one response python python from openai import OpenAI import json client = OpenAI() # Example dummy function hard coded to return the same weather # In production, this could be your backend API or an external API def get_current_weather(location, unit="fahrenheit"): """Get the current weather in a given location""" if "tokyo" in location.lower(): return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit}) elif "san francisco" in location.lower(): return json.dumps({"location": "San Francisco", "temperature": "72", "unit": unit}) elif "paris" in location.lower(): return json.dumps({"location": "Paris", "temperature": "22", "unit": unit}) else: return json.dumps({"location": location, "temperature": "unknown"}) def run_conversation(): # Step 1: send the conversation and available functions to the model messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}] tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}, }, "required": ["location"], }, }, } ] response = client.chat.completions.create( model="gpt-3.5-turbo-0125", messages=messages, tools=tools, tool_choice="auto", # auto is default, but we'll be explicit ) response_message = response.choices[0].message tool_calls = response_message.tool_calls # Step 2: check if the model wanted to call a function if tool_calls: # Step 3: call the function # Note: the JSON response may not always be valid; be sure to handle errors available_functions = { "get_current_weather": get_current_weather, } # only one function in this example, but you can have multiple messages.append(response_message) # extend conversation with assistant's reply # Step 4: send the info for each function call and function response to the model for tool_call in tool_calls: function_name = tool_call.function.name function_to_call = available_functions[function_name] function_args = json.loads(tool_call.function.arguments) function_response = function_to_call( location=function_args.get("location"), unit=function_args.get("unit"), ) messages.append( { "tool_call_id": tool_call.id, "role": "tool", "name": function_name, "content": function_response, } ) # extend conversation with function response second_response = client.chat.completions.create( model="gpt-3.5-turbo-0125", messages=messages, ) # get a new response from the model where it can see the function response return second_response print(run_conversation())import OpenAI from "openai"; const openai = new OpenAI(); // Example dummy function hard coded to return the same weather // In production, this could be your backend API or an external API function getCurrentWeather(location, unit = "fahrenheit") { if (location.toLowerCase().includes("tokyo")) { return JSON.stringify({ location: "Tokyo", temperature: "10", unit: "celsius" }); } else if (location.toLowerCase().includes("san francisco")) { return JSON.stringify({ location: "San Francisco", temperature: "72", unit: "fahrenheit" }); } else if (location.toLowerCase().includes("paris")) { return JSON.stringify({ location: "Paris", temperature: "22", unit: "fahrenheit" }); } else { return JSON.stringify({ location, temperature: "unknown" }); } } async function runConversation() { // Step 1: send the conversation and available functions to the model const messages = [ { role: "user", content: "What's the weather like in San Francisco, Tokyo, and Paris?" }, ]; const tools = [ { type: "function", function: { name: "get_current_weather", description: "Get the current weather in a given location", parameters: { type: "object", properties: { location: { type: "string", description: "The city and state, e.g. San Francisco, CA", }, unit: { type: "string", enum: ["celsius", "fahrenheit"] }, }, required: ["location"], }, }, }, ]; const response = await openai.chat.completions.create({ model: "gpt-3.5-turbo-0125", messages: messages, tools: tools, tool_choice: "auto", // auto is default, but we'll be explicit }); const responseMessage = response.choices[0].message; // Step 2: check if the model wanted to call a function const toolCalls = responseMessage.tool_calls; if (responseMessage.tool_calls) { // Step 3: call the function // Note: the JSON response may not always be valid; be sure to handle errors const availableFunctions = { get_current_weather: getCurrentWeather, }; // only one function in this example, but you can have multiple messages.push(responseMessage); // extend conversation with assistant's reply for (const toolCall of toolCalls) { const functionName = toolCall.function.name; const functionToCall = availableFunctions[functionName]; const functionArgs = JSON.parse(toolCall.function.arguments); const functionResponse = functionToCall( functionArgs.location, functionArgs.unit ); messages.push({ tool_call_id: toolCall.id, role: "tool", name: functionName, content: functionResponse, }); // extend conversation with function response } const secondResponse = await openai.chat.completions.create({ model: "gpt-3.5-turbo-0125", messages: messages, }); // get a new response from the model where it can see the function response return secondResponse.choices; } } runConversation().then(console.log).catch(console.error);THIS IS MY CURRENT app.py SCHEMAS.PY AND FUNCTIONS.PY import asyncio import websockets import json from openai import OpenAI from functions import available_functions from schemas import available_schemas from loguru import logger import sys # Initialize OpenAI client client = OpenAI(api_key="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") tools = [schema() for schema in available_schemas.values()] # Configure logger logger.add(sys.stderr, format="{time} {level} {message}", level="INFO", filter="my_module") logger.add("file_{time}.log", rotation="1 week") # Read the system message from file def get_system_message(): with open('system_message.txt', 'r') as file: return file.read().strip() # Function dispatcher def function_dispatcher(function_name, args): if function_name in available_functions: return available_functions[function_name](**args) else: return {"error": f"Function {function_name} not found."} # WebSocket server handler async def server(websocket, path): messages = [] system_message = get_system_message() messages.append({'role': 'system', 'content': system_message}) async for message in websocket: data = json.loads(message) user_input = data['content'] messages.append({'role': 'user', 'content': user_input}) try: response = client.chat.completions.create( model="gpt-4-0125-preview", messages=messages, tools=tools, tool_choice="auto" ) if hasattr(response, 'choices') and len(response.choices) > 0: response_message = response.choices[0].message function_response_processed = False if hasattr(response_message, 'tool_calls') and response_message.tool_calls: for tool_call in response_message.tool_calls: function_name = tool_call.function.name function_args = json.loads(tool_call.function.arguments) function_response = function_dispatcher(function_name, function_args) if isinstance(function_response, dict): function_response = json.dumps(function_response) messages.append({ "tool_call_id": tool_call.id, "role": "system", "content": function_response }) function_response_processed = True if function_response_processed: second_response = client.chat.completions.create( model="gpt-3.5-turbo-0125", messages=messages, ) if hasattr(second_response, 'choices') and len(second_response.choices) > 0: final_message = second_response.choices[0].message.content await websocket.send(final_message) messages.append({'role': 'assistant', 'content': final_message}) elif response_message.content: await websocket.send(response_message.content) messages.append({'role': 'assistant', 'content': response_message.content}) except Exception as e: logger.error(f"Error: {str(e)}") await websocket.send(f"Error: {str(e)}") # Start the WebSocket server start_server = websockets.serve(server, "localhost", 6789) asyncio.get_event_loop().run_until_complete(start_server) asyncio.get_event_loop().run_forever()from openai import OpenAI import json import os import requests from datetime import datetime from loguru import logger import sys import cv2 import base64 # Configure loguru logger logger.add(sys.stderr, format="{time} {level} {message}", level="INFO") logger.add("file_{time}.log", rotation="1 week") client = OpenAI(api_key="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") # Fixed file paths MEMORIES_FILE_PATH = 'memories.json' def manage_memories_json(operation, data=None): logger.info(f"Attempting to {operation} memories JSON with data: {data}") try: if operation == 'read': with open(MEMORIES_FILE_PATH, 'r') as file: return json.load(file) elif operation in ['write', 'update']: if data is not None: # Ensure that 'data' is a dictionary as expected if not isinstance(data, dict): raise ValueError("Provided data is not in the correct format. It must be a dictionary.") if operation == 'update': with open(MEMORIES_FILE_PATH, 'r') as file: existing_data = json.load(file) # Assuming 'data' is a dictionary, update 'existing_data' with it existing_data.update(data) data_to_write = existing_data else: # For 'write' operation, use the provided 'data' directly data_to_write = data with open(MEMORIES_FILE_PATH, 'w') as file: json.dump(data_to_write, file, indent=4) success_message = "JSON updated successfully." if operation == 'update' else "Data written successfully." logger.info(success_message) return success_message else: raise ValueError("Data is required for write/update operations.") else: raise ValueError("Invalid operation specified.") except Exception as e: logger.error(f"An error occurred: {e}") return f"An error occurred: {e}" def download_image(image_url, save_directory="generated_images"): logger.info(f"Downloading image from {image_url}") try: os.makedirs(save_directory, exist_ok=True) timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") filename = f"{timestamp}.png" filepath = os.path.join(save_directory, filename) response = requests.get(image_url) if response.status_code == 200: with open(filepath, 'wb') as file: file.write(response.content) logger.info(f"Image saved to {filepath}") # After saving the image: metadata = {"latest_image": filename} with open('latest_image.json', 'w') as metafile: json.dump(metadata, metafile) return filepath else: logger.error(f"Failed to download image, status code {response.status_code}") return None except Exception as e: logger.error(f"Error downloading image: {e}") return None def generate_dalle_image(prompt_description): logger.info("Generating DALL·E image") full_prompt = f"{prompt_description} in a vintage woocut illustration style. I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:." try: response = client.images.generate( model="dall-e-3", prompt=full_prompt, size="1024x1024", quality="standard", n=1, ) image_url = response.data[0].url revised_prompt = response.data[0].revised_prompt image_path = download_image(image_url) return { "local_image_path": image_path, "revised_prompt": revised_prompt } except Exception as e: logger.error(f"Error generating DALL·E image: {e}") return {"error": f"An error occurred: {e}"} def capture_and_process_image(): logger.info("Capturing image from webcam...") cap = cv2.VideoCapture(0) ret, frame = cap.read() cap.release() if not ret: logger.error("Failed to capture image") return {"error": "Failed to capture image"} # Convert captured image to base64 _, buffer = cv2.imencode('.jpg', frame) base64_image = base64.b64encode(buffer).decode('utf-8') logger.info("Processing image through GPT Vision...") try: response = client.chat.completions.create( model="gpt-4-vision-preview", messages=[ { "role": "user", "content": [ {"type": "text", "text": "What's in this image?"}, {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}} ], } ], max_tokens=300, ) return { "gpt_vision_response": response.choices[0].message.content if response.choices else None } except Exception as e: logger.error(f"Error processing image through GPT Vision: {e}") return {"error": f"Error processing image through GPT Vision: {e}"} def update_memory_from_chat(chat_input): logger.info("Generating memory from chat input...") try: response = client.chat.completions.create( model="gpt-3.5-turbo-0125", response_format={"type": "json_object"}, messages=[ {"role": "system", "content": "You are a helpful assistant designed to output JSON based on the chat input."}, {"role": "user", "content": chat_input} ] ) # Extract the structured memory from the response structured_memory = json.loads(response.choices[0].message.content) # Update the memories.json file with this new memory update_status = manage_memories_json(operation="update", data=structured_memory) return update_status except Exception as e: logger.error(f"Error generating memory from chat: {e}") return {"error": f"Error generating memory from chat: {e}"} # Add other functions to this dictionary as you define them available_functions = { "memories_json": manage_memories_json, "generate_dalle_image": generate_dalle_image, "capture_and_process_image": capture_and_process_image, "update_memory_from_chat": update_memory_from_chat }def get_generate_dalle_image_schema(): """ Generates the schema for the 'generate_dalle_image' function. Returns: dict: The schema dictionary for the function. """ return { "type": "function", "function": { "name": "generate_dalle_image", "description": "Generates an image based on a description using DALL\u00B7E 3.", "parameters": { "type": "object", "properties": { "prompt_description": { "type": "string", "description": "The description of the scene or object for the image generation." } }, "required": ["prompt_description"] }, "response": { "type": "object", "properties": { "image_url": { "type": "string", "description": "The URL of the generated image." }, "revised_prompt": { "type": "string", "description": "The revised prompt used for image generation." } } } } } def get_memories_schema(): """ Generates the schema for the 'memories_json' function. """ return { "type": "function", "function": { "name": "memories_json", "description": "Manages the memories JSON data using a fixed file path.", "parameters": { "type": "object", "properties": { "operation": { "type": "string", "enum": ["read", "write", "update"], "description": "The operation to perform on the memories JSON data." }, "data": { "type": "object", "description": "Data for the write/update operation. Not required for read.", "additionalProperties": True } }, "required": ["operation"] }, "response": { "type": "object", "properties": { "result": { "type": "string", "description": "The outcome of the operation." } } } } } def get_capture_and_process_image_schema(): """ Generates the schema for the 'capture_and_process_image' function. """ return { "type": "function", "function": { "name": "capture_and_process_image", "description": "Captures an image from the webcam and processes it through GPT Vision.", "parameters": {}, "response": { "type": "object", "properties": { "gpt_vision_response": { "type": "string", "description": "The response from GPT Vision about the captured image." } } } } } def get_update_memory_from_chat_schema(): """ Generates the schema for the 'update_memory_from_chat' function. """ return { "type": "function", "function": { "name": "update_memory_from_chat", "description": "Generates and updates memories based on chat input.", "parameters": { "type": "object", "properties": { "chat_input": { "type": "string", "description": "The chat input to generate a memory from." } }, "required": ["chat_input"] }, "response": { "type": "object", "properties": { "status": { "type": "string", "description": "The status of the memory update operation." } } } } } available_schemas = { "memories_schema": get_memories_schema, "generate_dalle_image_schema": get_generate_dalle_image_schema, "capture_and_process_image_schema": get_capture_and_process_image_schema, "get_update_memory_from_chat_schema": get_update_memory_from_chat_schema } ALSO THIS IS HOW YOU USE GPT4' VISION Vision Learn how to use GPT-4 to understand images Introduction GPT-4 Turbo with Vision allows the model to take in images and answer questions about them. Historically, language model systems have been limited by taking in a single input modality, text. For many use cases, this constrained the areas where models like GPT-4 could be used. Previously, the model has sometimes been referred to as GPT-4V or gpt-4-vision-preview in the API. Plese note that the Assistants API does not currently support image inputs. Quick start Images are made available to the model in two main ways: by passing a link to the image or by passing the base64 encoded image directly in the request. Images can be passed in the user, system and assistant messages. Currently we don't support images in the first system message but this may change in the future. What's in this image? python python from openai import OpenAI client = OpenAI() response = client.chat.completions.create( model="gpt-4-turbo", messages=[ { "role": "user", "content": [ {"type": "text", "text": "What’s in this image?"}, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", }, }, ], } ], max_tokens=300, ) print(response.choices[0]) The model is best at answering general questions about what is present in the images. While it does understand the relationship between objects in images, it is not yet optimized to answer detailed questions about the location of certain objects in an image. For example, you can ask it what color a car is or what some ideas for dinner might be based on what is in you fridge, but if you show it an image of a room and ask it where the chair is, it may not answer the question correctly. It is important to keep in mind the limitations of the model as you explore what use-cases visual understanding can be applied to. Video understanding with vision Learn how to use use GPT-4 with Vision to understand videos in the OpenAI Cookbook Uploading base 64 encoded images If you have an image or set of images locally, you can pass those to the model in base 64 encoded format, here is an example of this in action: import base64 import requests # OpenAI API Key api_key = "YOUR_OPENAI_API_KEY" # Function to encode the image def encode_image(image_path): with open(image_path, "rb") as image_file: return base64.b64encode(image_file.read()).decode('utf-8') # Path to your image image_path = "path_to_your_image.jpg" # Getting the base64 string base64_image = encode_image(image_path) headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "model": "gpt-4-turbo", "messages": [ { "role": "user", "content": [ { "type": "text", "text": "What’s in this image?" }, { "type": "image_url", "image_url": { "url": f"data:image/jpeg;base64,{base64_image}" } } ] } ], "max_tokens": 300 } response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload) print(response.json()) Multiple image inputs The Chat Completions API is capable of taking in and processing multiple image inputs in both base64 encoded format or as an image URL. The model will process each image and use the information from all of them to answer the question. Multiple image inputs python python from openai import OpenAI client = OpenAI() response = client.chat.completions.create( model="gpt-4-turbo", messages=[ { "role": "user", "content": [ { "type": "text", "text": "What are in these images? Is there any difference between them?", }, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", }, }, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", }, }, ], } ], max_tokens=300, ) print(response.choices[0]) Here the model is shown two copies of the same image and can answer questions about both or each of the images independently. Low or high fidelity image understanding By controlling the detail parameter, which has three options, low, high, or auto, you have control over how the model processes the image and generates its textual understanding. By default, the model will use the auto setting which will look at the image input size and decide if it should use the low or high setting. low will enable the "low res" mode. The model will receive a low-res 512px x 512px version of the image, and represent the image with a budget of 65 tokens. This allows the API to return faster responses and consume fewer input tokens for use cases that do not require high detail. high will enable "high res" mode, which first allows the model to see the low res image and then creates detailed crops of input images as 512px squares based on the input image size. Each of the detailed crops uses twice the token budget (65 tokens) for a total of 129 tokens. Choosing the detail level python python from openai import OpenAI client = OpenAI() response = client.chat.completions.create( model="gpt-4-turbo", messages=[ { "role": "user", "content": [ {"type": "text", "text": "What’s in this image?"}, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg", "detail": "high" }, }, ], } ], max_tokens=300, ) print(response.choices[0].message.content) Managing images The Chat Completions API, unlike the Assistants API, is not stateful. That means you have to manage the messages (including images) you pass to the model yourself. If you want to pass the same image to the model multiple times, you will have to pass the image each time you make a request to the API. For long running conversations, we suggest passing images via URL's instead of base64. The latency of the model can also be improved by downsizing your images ahead of time to be less than the maximum size they are expected them to be. For low res mode, we expect a 512px x 512px image. For high res mode, the short side of the image should be less than 768px and the long side should be less than 2,000px. After an image has been processed by the model, it is deleted from OpenAI servers and not retained. We do not use data uploaded via the OpenAI API to train our models. JUST CONFIRME YOU UDNERSTANDjjust confirm that you've understood
212GPT FAQs
Currently, access to this GPT requires a ChatGPT Plus subscription.
Visit the largest GPT directory GPTsHunter.com, search to find the current GPT: "212GPT", click the button on the GPT detail page to navigate to the GPT Store. Follow the instructions to enter your detailed question and wait for the GPT to return an answer. Enjoy!
We are currently calculating its ranking on the GPT Store. Please check back later for updates.
More custom GPTs by Nigel Daley on the GPT Store
CONSTRUCT 3 EVENTER
Recreates Evvents.
50+
NODE-RAD
Node-Red Flow Maker
20+

APP Innovator
10+

THREAD MAKER
Makes continuse forum threads
10+
Node-Red Flask Reviver
NODE-RED
10+
Super Story Writer
Writes non generic STORIES
10+

Worldmaker
It builds worlds together
10+

FLASK REVIVER
IT REVIVES FLASK APPS
9+
3000 WORDS
8+

JSON MAKER
8+

Utilization Strategist Chatbot
Utilization Strategist Chatbot
6+
SPECULATIVE DSIGNER
5+

Fortunate Jack
Finds fortune for the user
4+

GPT ADVENTURE MAKER
GPT ADVENTURE GAME MAKER
4+
StorySketcher
Creates literary sketches for worldbuilding from user ideas.
3+

MBTime
MBTI discussion
1+

World-Noder
1+
