APP Innovator on the GPT Store

Use APP Innovator on ChatGPT
Use APP Innovator on 302.AI

GPT Prompt Starters

  • JSON mode

A common way to use Chat Completions is to instruct the model to always return a JSON object that makes sense for your use case by specifying this in the system message. While this does work in some cases, the models may occasionally generate output that does not parse to valid JSON.

To prevent these errors and improve model performance, when calling gpt-4-turbo-preview or gpt-3.5-turbo-1106 you can set response_format to { "type": "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON objects.

Important notes:

• When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
• The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response (see the sketch after this example).
• JSON mode does not guarantee the output matches any specific schema, only that it is valid JSON and parses without errors.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "Who won the world series in 2020?"}
    ]
)
print(response.choices[0].message.content)
```

In this example, the response includes a JSON object that looks something like the following:

"content": "{\"winner\": \"Los Angeles Dodgers\"}"

Note that JSON mode is always enabled when the model is generating arguments as part of function calling.
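As a minimal sketch of the finish_reason check mentioned in the notes above (it assumes the `response` object from the preceding example):

```python
import json

# Guard against truncated JSON before parsing.
choice = response.choices[0]
if choice.finish_reason == "length":
    # Generation hit max_tokens or the context limit; the JSON may be cut off.
    raise RuntimeError("Response truncated; increase max_tokens or shorten the prompt.")

data = json.loads(choice.message.content)
print(data)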
Here are the key points about GPT-4 with Vision (GPT-4V):

• Introduction of Vision Capability: GPT-4V introduces image processing, allowing it to interpret and respond to images along with text.
• Access for Developers: Available to developers with GPT-4 access via the gpt-4-vision-preview model in the Chat Completions API.
• Differences from GPT-4 Turbo: GPT-4 Turbo with Vision may behave slightly differently due to system message insertion. It has the same performance as GPT-4 Turbo on text tasks, with added vision capabilities.
• API Limitations: The message.name parameter, functions/tools, and the response_format parameter are not supported. A low max_tokens default is set but can be overridden.
• Image Input Methods: Images can be provided either via URL or base64 encoded directly in the request, applicable to user, system, and assistant messages.
• General Image Understanding: Best at answering general questions about images; less optimized for specific spatial queries within an image.
• Video Understanding: The capability extends to video understanding, detailed in the OpenAI Cookbook.
• Base64 Encoded Images: The example below demonstrates how to encode and use base64 images with the API.
• Handling Multiple Images: The API can process multiple images, either as URLs or base64 encoded, and use them in responses.
• Detail Control: The detail parameter controls image processing fidelity (low, high, auto), affecting response speed and token usage.
• Image Management: The API is not stateful, so images must be passed with each request. Using URLs is recommended for long conversations, and downsizing images improves latency.
• Data Privacy: Images are deleted after processing and are not used for model training.

```python
import base64
import requests

# Your OpenAI API key goes here. Replace 'YOUR_OPENAI_API_KEY' with your actual key.
api_key = "YOUR_OPENAI_API_KEY"

# Function to encode an image to base64. This is needed because the API requires
# images to be sent in base64 format if not using URLs.
def encode_image(image_path):
    # Open the image file in binary read mode.
    with open(image_path, "rb") as image_file:
        # Encode the binary data to base64 and decode to UTF-8 for API compatibility.
        return base64.b64encode(image_file.read()).decode('utf-8')

# Replace 'path_to_your_image.jpg' with the path to your actual image file.
image_path = "path_to_your_image.jpg"

# Convert your image to a base64 string.
base64_image = encode_image(image_path)

# Set up the headers for the API request, including the content type and authorization.
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}"
}

# Construct the payload for the API request. This includes the model to use,
# the content of your message, and the base64 encoded image, passed as a
# data URL inside an image_url content part.
payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}
                }
            ]
        }
    ],
    "max_tokens": 300
}

# Send a POST request to the OpenAI API with your headers and payload.
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)

# Print the response from the API. This will be the model's interpretation of your image.
print(response.json())
```

This script does the following:

• Imports the necessary libraries.
• Defines a function to encode images to base64.
• Sets up your API key and image path.
• Converts the image to base64.
• Prepares the headers and payload for the API request, including the model, message content, and image data.
• Sends the request to the OpenAI API.
• Prints out the response, which is the model's interpretation of the image.

To use this script, replace "YOUR_OPENAI_API_KEY" with your actual OpenAI API key and "path_to_your_image.jpg" with the path to your image. It should then work out of the box for vision requests against the GPT-4 API.
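The key points above also mention URL input and the detail parameter, which the base64 script does not show. A minimal sketch using the official Python SDK (the image URL is a placeholder; substitute a real, publicly reachable image):

```python
from openai import OpenAI

client = OpenAI()

# Sketch: pass an image by URL instead of base64, with the optional
# `detail` parameter controlling processing fidelity ("low", "high", "auto").
response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/photo.jpg",  # placeholder URL
                        "detail": "low",  # trades fidelity for speed and fewer tokens
                    },
                },
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```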
Key points for function calling in the OpenAI API:

• Enhanced Model Integration: Connects large language models with external tools through function calls.
• Issue with Non-ASCII Outputs: In models like gpt-3.5-turbo and gpt-4, non-ASCII characters in function arguments may be returned as Unicode escape sequences.
• Function Call Mechanics: The model can generate JSON objects as responses, indicating arguments for one or many function calls.
• Training of Latest Models: Recent models are trained to detect when to call a function and to adhere to function signatures.
• Risks and Recommendations: User confirmation is recommended before executing actions with real-world impacts.
• Structured Data Retrieval: Function calling can turn natural language into structured API calls or extract structured data.
• Step-by-Step Process: The sequence involves sending user queries, model function calls, parsing JSON, executing functions, and summarizing results.
• Supported Models: Function calling is supported in several GPT-3.5 and GPT-4 models.
• Parallel Function Calling: Some models support multiple simultaneous function calls, resolving their effects in parallel.
• Example Usage: The example below demonstrates a full function-calling round trip.

```python
from openai import OpenAI
import json
import os

client = OpenAI()

# Custom function the model can call: creates a text file in the current
# working directory with the given name and content.
def create_and_write_file(filename, content):
    filepath = os.path.join(os.getcwd(), filename)
    with open(filepath, "w") as f:
        f.write(content)
    return f"File '{filename}' created with the provided content."

def run_conversation():
    messages = [
        {"role": "user", "content": "Create a file named 'example.txt' containing 'Hello, world!'"}
    ]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "create_and_write_file",
                "description": "Create a text file and write content to it",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "filename": {
                            "type": "string",
                            "description": "The name of the file to create",
                        },
                        "content": {
                            "type": "string",
                            "description": "The content to write in the file",
                        },
                    },
                    "required": ["filename", "content"],
                },
            },
        }
    ]

    # First response from the model - potentially calling the function
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message
    tool_calls = response_message.tool_calls

    # Check if the model wants to call a function
    if tool_calls:
        # Prepare for function execution
        available_functions = {
            "create_and_write_file": create_and_write_file,
        }
        messages.append(response_message)  # Add the model's response to the conversation

        # Execute each function call
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_to_call = available_functions[function_name]
            function_args = json.loads(tool_call.function.arguments)
            function_response = function_to_call(
                filename=function_args.get("filename"),
                content=function_args.get("content"),
            )
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )

        # Get the model's response to the function execution
        second_response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
        )
        return second_response

# Run the conversation and print the result
print(run_conversation())
```

Explanation:

• Importing Libraries: The necessary modules (OpenAI, json, os) are imported for API interaction and file operations.
• Function create_and_write_file: This custom function creates a text file with the specified name and content. It uses os.path.join and os.getcwd() to ensure the file is created in the current working directory.
• Function run_conversation: Manages the interaction with the OpenAI model.
• User Query Setup: We start with a user message and define a tool (function) that the model can call.
• First Model Response: The API is called with the user's message and available tools. The model may choose to call the function based on the input.
• Function Execution: If the model calls a function, the script parses the arguments, executes the function, and prepares the result.
• Second Model Response: The script then sends the function execution results back to the model for a final response.

When executed, this script interacts with the OpenAI model to create a file named 'example.txt' with 'Hello, world!' as its content, based on the initial user query. The function's action (file creation) and the model's responses are managed in a clear, step-by-step manner.
  • Dive into the world of Python-based structured extraction, empowered by OpenAI's function calling API. Instructor stands out for its simplicity, transparency, and user-centric design. Whether you're a seasoned developer or just starting out, you'll find Instructor's approach intuitive and its results insightful.

Ports to other languages

Check out ports to other languages below:

• Typescript / Javascript
• Elixir

If you want to port Instructor to another language, please reach out to us on Twitter; we'd love to help you get started!

Get Started in Moments

Installing Instructor is a breeze. Simply run pip install instructor in your terminal and you're on your way to a smoother data-handling experience.

How Instructor Enhances Your Workflow

The instructor.patch for the OpenAI class introduces three key enhancements:

• Response Mode: Specify a Pydantic model to streamline data extraction.
• Max Retries: Set your desired number of retry attempts for requests.
• Validation Context: Provide a context object for enhanced validator access.

A Glimpse into Instructor's Capabilities

Using Validators

To learn more about validators, check out the blog post "Good LLM validation is just good validation".

Usage

With Instructor, your code becomes more efficient and readable. Here's a quick peek:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel

# Enables `response_model`
client = instructor.patch(OpenAI())

class UserDetail(BaseModel):
    name: str
    age: int

user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ]
)

assert isinstance(user, UserDetail)
assert user.name == "Jason"
assert user.age == 25
```

Using openai<1.0.0

If you're using openai<1.0.0, make sure you pip install "instructor<0.3.0", which lets you patch a global client like so:

```python
import openai
import instructor

instructor.patch()

user = openai.ChatCompletion.create(
    ...,
    response_model=UserDetail,
)
```

Using async clients

For async clients you must use apatch instead of patch, as shown:

```python
import instructor
from openai import AsyncOpenAI
from pydantic import BaseModel

aclient = instructor.apatch(AsyncOpenAI())

class UserExtract(BaseModel):
    name: str
    age: int

model = await aclient.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserExtract,
    messages=[
        {"role": "user", "content": "Extract jason is 25 years old"},
    ],
)

assert isinstance(model, UserExtract)
```

Step 1: Patch the client

First, import the required libraries and apply the patch function to the OpenAI module. This exposes new functionality via the response_model parameter.

```python
import instructor
from openai import OpenAI

# This enables the response_model keyword
# on client.chat.completions.create
client = instructor.patch(OpenAI())
```

Step 2: Define the Pydantic Model

Create a Pydantic model to define the structure of the data you want to extract. This model maps directly to the information in the prompt.

```python
from pydantic import BaseModel

class UserDetail(BaseModel):
    name: str
    age: int
```

Step 3: Extract

Use the client.chat.completions.create method to send a prompt and extract the data into the Pydantic object. The response_model parameter specifies the Pydantic model to use for extraction. It is helpful to annotate the variable with the type of the response model, which helps your IDE provide autocomplete and spell check.
```python
user: UserDetail = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetail,
    messages=[
        {"role": "user", "content": "Extract Jason is 25 years old"},
    ]
)

assert user.name == "Jason"
assert user.age == 25
```

Pydantic Validation

Validation can also be plugged into the same Pydantic model. In this example, if the answer attribute contains content that violates the rule "don't say objectionable things", Pydantic will raise a validation error.

```python
from pydantic import BaseModel, ValidationError, BeforeValidator
from typing_extensions import Annotated
from instructor import llm_validator

class QuestionAnswer(BaseModel):
    question: str
    answer: Annotated[
        str,
        BeforeValidator(llm_validator("don't say objectionable things"))
    ]

try:
    qa = QuestionAnswer(
        question="What is the meaning of life?",
        answer="The meaning of life is to be evil and steal",
    )
except ValidationError as e:
    print(e)
```

It is important to note that the error message is generated by the LLM, not the code, which makes it useful for re-asking the model:

1 validation error for QuestionAnswer
answer
  Assertion failed, The statement is objectionable. (type=assertion_error)

Re-ask on validation error

Here, the UserDetails model is passed as the response_model, and max_retries is set to 2.

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, field_validator

# Apply the patch to the OpenAI client
client = instructor.patch(OpenAI())

class UserDetails(BaseModel):
    name: str
    age: int

    @field_validator("name")
    @classmethod
    def validate_name(cls, v):
        if v.upper() != v:
            raise ValueError("Name must be in uppercase.")
        return v

model = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=UserDetails,
    max_retries=2,
    messages=[
        {"role": "user", "content": "Extract jason is 25 years old"},
    ],
)

assert model.name == "JASON"
```
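The third enhancement listed earlier, Validation Context, can be sketched as follows. This is a minimal sketch assuming instructor forwards the validation_context argument to Pydantic validators via ValidationInfo.context; the model fields, context key, and prompt here are illustrative, not part of any fixed API:

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel, ValidationInfo, field_validator

client = instructor.patch(OpenAI())

class Citation(BaseModel):
    quote: str

    @field_validator("quote")
    @classmethod
    def quote_must_come_from_source(cls, v: str, info: ValidationInfo):
        # The object passed as validation_context is exposed to validators here.
        context = info.context
        if context and v not in context.get("source_text", ""):
            raise ValueError("Quote not found in the source text.")
        return v

source = "Jason is 25 years old and lives in Denver."

citation = client.chat.completions.create(
    model="gpt-3.5-turbo",
    response_model=Citation,
    validation_context={"source_text": source},  # context key is illustrative
    messages=[
        {"role": "user", "content": f"Quote Jason's age from: {source}"},
    ],
)
```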
Function calling

Learn how to connect large language models to external tools.

Introduction

In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.

The latest models (gpt-3.5-turbo-0125 and gpt-4-turbo-preview) have been trained to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also come potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc.).

This guide is focused on function calling with the Chat Completions API; for details on function calling in the Assistants API, please see the Assistants Tools page.

Common use cases

Function calling allows you to more reliably get structured data back from the model. For example, you can:

• Create assistants that answer questions by calling external APIs (e.g. like ChatGPT Plugins): define functions like send_email(to: string, body: string) or get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
• Convert natural language into API calls: convert "Who are my top customers?" to get_customers(min_revenue: int, created_before: string, limit: int) and call your internal API
• Extract structured data from text: define a function called extract_data(name: string, birthday: string) or sql_query(query: string)

...and much more!

The basic sequence of steps for function calling is as follows:

• Call the model with the user query and a set of functions defined in the functions parameter.
• The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may hallucinate parameters).
• Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
• Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.

Supported models

Not all model versions are trained with function calling data. Function calling is supported with the following models: gpt-4, gpt-4-turbo-preview, gpt-4-0125-preview, gpt-4-1106-preview, gpt-4-0613, gpt-3.5-turbo, gpt-3.5-turbo-0125, gpt-3.5-turbo-1106, and gpt-3.5-turbo-0613.

In addition, parallel function calling is supported on the following models: gpt-4-turbo-preview, gpt-4-0125-preview, gpt-4-1106-preview, gpt-3.5-turbo-0125, and gpt-3.5-turbo-1106.

Parallel function calling

Parallel function calling is the model's ability to perform multiple function calls together, allowing the effects and results of these function calls to be resolved in parallel. This is especially useful if functions take a long time to run, and it reduces round trips with the API. For example, the model may call functions to get the weather in 3 different locations at the same time, which results in a message with 3 function calls in the tool_calls array, each with an id. To respond to these function calls, add 3 new messages to the conversation, each containing the result of one function call, with a tool_call_id referencing the id from tool_calls.

In the example below, we define a single function get_current_weather. The model calls the function multiple times, and after we send the function responses back to the model, we let it decide the next step. It responds with a user-facing message telling the user the temperature in San Francisco, Tokyo, and Paris. Depending on the query, it may choose to call a function again.

If you want to force the model to call a specific function, you can do so by setting tool_choice to a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none". Note that the default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and, if so, which function to call, as sketched below.
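A minimal sketch of the tool_choice options just described, in Python (the tool schema matches the get_current_weather definition used in the node.js example that follows; the user message is illustrative):

```python
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Force a call to a specific function by name.
forced = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)

# Forbid tool calls entirely, forcing a user-facing text reply.
text_only = client.chat.completions.create(
    model="gpt-3.5-turbo-0125",
    messages=messages,
    tools=tools,
    tool_choice="none",
)
```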
Example invoking multiple function calls in one response (node.js):

```javascript
import OpenAI from "openai";
const openai = new OpenAI();

// Example dummy function hard coded to return the same weather
// In production, this could be your backend API or an external API
function getCurrentWeather(location, unit = "fahrenheit") {
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location: "Tokyo", temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({ location: "San Francisco", temperature: "72", unit: "fahrenheit" });
  } else if (location.toLowerCase().includes("paris")) {
    return JSON.stringify({ location: "Paris", temperature: "22", unit: "fahrenheit" });
  } else {
    return JSON.stringify({ location, temperature: "unknown" });
  }
}

async function runConversation() {
  // Step 1: send the conversation and available functions to the model
  const messages = [
    { role: "user", content: "What's the weather like in San Francisco, Tokyo, and Paris?" },
  ];
  const tools = [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: {
              type: "string",
              description: "The city and state, e.g. San Francisco, CA",
            },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ];

  const response = await openai.chat.completions.create({
    model: "gpt-3.5-turbo-0125",
    messages: messages,
    tools: tools,
    tool_choice: "auto", // auto is default, but we'll be explicit
  });
  const responseMessage = response.choices[0].message;

  // Step 2: check if the model wanted to call a function
  const toolCalls = responseMessage.tool_calls;
  if (toolCalls) {
    // Step 3: call the function
    // Note: the JSON response may not always be valid; be sure to handle errors
    const availableFunctions = {
      get_current_weather: getCurrentWeather,
    }; // only one function in this example, but you can have multiple
    messages.push(responseMessage); // extend conversation with assistant's reply
    for (const toolCall of toolCalls) {
      const functionName = toolCall.function.name;
      const functionToCall = availableFunctions[functionName];
      const functionArgs = JSON.parse(toolCall.function.arguments);
      const functionResponse = functionToCall(
        functionArgs.location,
        functionArgs.unit
      );
      messages.push({
        tool_call_id: toolCall.id,
        role: "tool",
        name: functionName,
        content: functionResponse,
      }); // extend conversation with function response
    }
    const secondResponse = await openai.chat.completions.create({
      model: "gpt-3.5-turbo-0125",
      messages: messages,
    }); // get a new response from the model where it can see the function response
    return secondResponse.choices;
  }
}

runConversation().then(console.log).catch(console.error);
```

Given this clarification from the documentation, the instructor library does indeed support asynchronous operations through the use of apatch instead of patch for async clients such as AsyncOpenAI from the OpenAI library. For async use, you should use instructor.apatch to modify the AsyncOpenAI client, making it compatible with async operations and allowing you to use await with its methods, such as chat.completions.create. The error in your initial attempt likely stemmed from not using the apatch method for the asynchronous OpenAI client (AsyncOpenAI).
This method specifically adapts the OpenAI client for asynchronous use, including compatibility with await and integration with Pydantic models for structured data extraction.

How to Correctly Use instructor with Async

Based on the snippet above, here is the Anansi adventure example adjusted to correctly use instructor in an async context:

```python
import asyncio
from openai import AsyncOpenAI
import instructor
from pydantic import BaseModel

# Use `apatch` for the async OpenAI client
aclient = instructor.apatch(AsyncOpenAI())

class AnansiAdventure(BaseModel):
    scenario: str
    question: str
    adventure: str
    challenge: str  # Dynamically generated challenge
    solution: str   # AI-generated solution based on the challenge

async def create_anansi_adventure(scenario: str, question: str):
    # Generate the initial part of the adventure based on the scenario and question
    initial_response = await aclient.chat.completions.create(
        model="gpt-3.5-turbo",
        response_model=AnansiAdventure,
        messages=[
            {"role": "user", "content": f"Scenario: {scenario}. Question: {question}"},
        ]
    )
    # The AI generates a structured response with an adventure narrative,
    # challenge, and solution
    return initial_response

# Example usage
async def main():
    scenario = "Anansi is caught in a web of lies by the Sky God."
    question = "How does Anansi escape without angering the Sky God and learn a lesson?"
    adventure = await create_anansi_adventure(scenario, question)
    print("Generated Adventure: ", adventure.adventure)
    print("Challenge: ", adventure.challenge)
    print("Solution: ", adventure.solution)

# Run the async main function
if __name__ == "__main__":
    asyncio.run(main())
```

Key Adjustments

• Use of apatch: This is crucial for ensuring that the OpenAI client is prepared for async operations and integrates smoothly with Pydantic for structured response modeling.
• Async Client (AsyncOpenAI): This ensures that all operations are non-blocking and can be awaited, fitting well into Python's async/await ecosystem.
• Awaiting on create: With apatch, you can correctly use await with the chat.completions.create method, as it returns a coroutine suitable for async operations.

APP Innovator GPT FAQs

Currently, access to this GPT requires a ChatGPT Plus subscription.
Visit GPTsHunter.com, the largest GPT directory, and search for the GPT "APP Innovator". Click the button on the GPT detail page to navigate to the GPT Store, then enter your detailed question and wait for the GPT to return an answer. Enjoy!
We are currently calculating its ranking on the GPT Store. Please check back later for updates.

Best Alternative GPTs to APP Innovator on GPTs Store

App Innovator AI

Delivers fully autonomous, intuitive, production-ready full-stack apps without repeated prompts.

200+

GPT App Innovator

Helps in creating innovative GPT applications for various industries.

40+

App Innovator

A guide for app development, from ideation to marketing.

30+

App Innovator

Generates AI tech app ideas and guides on prompt writing.

30+

Trend App Innovator

I generate app ideas based on trends and available APIs.

30+

Web App Innovator

Focuses on evaluating MVPs and generating code.

20+

Icon Innovator for App Conversion Analysis

Expert in app icon design for ASO and conversion analysis.

20+

App Idea Spinner

Friendly, engaging app idea innovator

9+

App Innovator

Adaptable guide in translation app development.

6+

App Innovator

Designs innovative, popular apps with advanced features.

6+

App Innovator

AI-Enhanced iOS App Development Advisor

5+

App Innovator

Formal tech expert in app development

5+

App Innovator

Creative app development expert for mobile and web apps

5+

App Innovator

Helps brainstorm unique app ideas.

5+

App Innovator

An app developer specializing in productivity apps.

3+

App Innovator

A guide for smartphone app creation and release, offering technical and marketing advice.

1+

App Innovator

Creative entrepreneur aiding in app idea development and innovation.

1+

App Innovator

Formal yet creative app development expert

1+

App Innovator

Expert in innovative iOS app development.
