Upgraded agents with PraisonAI. The --agents flag will now CREATE an AI agent for you and then perform a task. Enjoy!
This commit is contained in:
parent 24f44b41f2
commit eafc2df48c
README.md (26 lines changed)
@@ -209,8 +209,9 @@ Once you have it all set up, here's how to use it.
 `fabric -h`
 
 ```bash
-usage: fabric [-h] [--text TEXT] [--copy] [--agents {trip_planner,ApiKeys}] [--output [OUTPUT]] [--gui] [--stream] [--list] [--update] [--pattern PATTERN] [--setup] [--changeDefaultModel CHANGEDEFAULTMODEL]
-              [--model MODEL] [--listmodels] [--remoteOllamaServer REMOTEOLLAMASERVER] [--context]
+usage: fabric [-h] [--text TEXT] [--copy] [--agents] [--output [OUTPUT]] [--gui] [--stream] [--list] [--temp TEMP] [--top_p TOP_P] [--frequency_penalty FREQUENCY_PENALTY]
+              [--presence_penalty PRESENCE_PENALTY] [--update] [--pattern PATTERN] [--setup] [--changeDefaultModel CHANGEDEFAULTMODEL] [--model MODEL] [--listmodels]
+              [--remoteOllamaServer REMOTEOLLAMASERVER] [--context]
 
 An open source framework for augmenting humans using AI.
 
@@ -218,13 +219,18 @@ options:
   -h, --help            show this help message and exit
   --text TEXT, -t TEXT  Text to extract summary from
   --copy, -C            Copy the response to the clipboard
-  --agents {trip_planner,ApiKeys}, -a {trip_planner,ApiKeys}
-                        Use an AI agent to help you with a task. Acceptable values are 'trip_planner' or 'ApiKeys'. This option cannot be used with any other flag.
+  --agents, -a          Use praisonAI to create an AI agent and then use it. ex: 'write me a movie script'
   --output [OUTPUT], -o [OUTPUT]
                         Save the response to a file
   --gui                 Use the GUI (Node and npm need to be installed)
   --stream, -s          Use this option if you want to see the results in realtime. NOTE: You will not be able to pipe the output into another command.
   --list, -l            List available patterns
+  --temp TEMP           set the temperature for the model. Default is 0
+  --top_p TOP_P         set the top_p for the model. Default is 1
+  --frequency_penalty FREQUENCY_PENALTY
+                        set the frequency penalty for the model. Default is 0.1
+  --presence_penalty PRESENCE_PENALTY
+                        set the presence penalty for the model. Default is 0.1
   --update, -u          Update patterns. NOTE: This will revert the default model to gpt4-turbo. please run --changeDefaultModel to once again set default model
   --pattern PATTERN, -p PATTERN
                         The pattern (prompt) to use
@@ -463,7 +469,7 @@ The content features a conversation between two individuals discussing various t
 
 You can also use Custom Patterns with Fabric, meaning Patterns you keep locally and don't upload to Fabric.
 
-One possible place to store them is `~/.config/custom-fabric-patterns`.
+One possible place to store PraisonAI with fabric. For more information about this amazing project please visit https://github.com/MervinPraison/PraisonAIthem is `~/.config/custom-fabric-patterns`.
 
 Then when you want to use them, simply copy them into `~/.config/fabric/patterns`.
 
@@ -477,6 +483,16 @@ Now you can run them with:
 pbpaste | fabric -p your_custom_pattern
 ```
 
+## Agents
+
+NEW FEATURE! We have incorporated PraisonAI with Fabric. For more information about this amazing project, please visit https://github.com/MervinPraison/PraisonAI. This feature CREATES AI agents and then uses them to perform a task.
+
+```bash
+echo "Search for recent articles about the future of AI and write me a 500 word essay on the findings" | fabric --agents
+```
+
+This feature works with all OpenAI and Ollama models but does NOT work with Claude. You can specify your model with the -m flag.
+
 ## Helper Apps
 
 These are helper tools to work with Fabric. Examples include things like getting transcripts from media files, getting metadata about media, etc.
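The new README section above notes that the model can be chosen with -m. As a hedged companion example (not part of this commit; "llama3" is an illustrative placeholder for a locally pulled Ollama model):

```bash
# Illustrative only: run the new agents feature against a local Ollama model.
# Replace "llama3" with any model name reported by `fabric --listmodels`.
echo "Plan a three-day conference agenda about open source AI" | fabric --agents -m llama3
```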
db/chroma.sqlite3 (BIN, new file)
Binary file not shown.
@@ -16,8 +16,8 @@ def main():
         "--copy", "-C", help="Copy the response to the clipboard", action="store_true"
     )
     parser.add_argument(
-        '--agents', '-a', choices=['trip_planner', 'ApiKeys'],
-        help="Use an AI agent to help you with a task. Acceptable values are 'trip_planner' or 'ApiKeys'. This option cannot be used with any other flag."
+        '--agents', '-a',
+        help="Use praisonAI to create an AI agent and then use it. ex: 'write me a movie script'", action="store_true"
     )

     parser.add_argument(
@@ -89,17 +89,6 @@ def main():
     if args.changeDefaultModel:
         Setup().default_model(args.changeDefaultModel)
         sys.exit()
-    if args.agents:
-        # Handle the agents logic
-        if args.agents == 'trip_planner':
-            from .agents.trip_planner.main import planner_cli
-            tripcrew = planner_cli()
-            tripcrew.ask()
-            sys.exit()
-        elif args.agents == 'ApiKeys':
-            from .utils import AgentSetup
-            AgentSetup().run()
-            sys.exit()
     if args.gui:
         run_electron_app()
         sys.exit()
@@ -111,6 +100,18 @@ def main():
     if not os.path.exists(os.path.join(config, "context.md")):
         print("Please create a context.md file in ~/.config/fabric")
         sys.exit()
+    if args.agents:
+        standalone = Standalone(args)
+        text = ""  # Initialize text variable
+        # Check if an argument was provided to --agents
+        if args.text:
+            text = args.text
+        else:
+            text = standalone.get_cli_input()
+        if text:
+            standalone = Standalone(args)
+            standalone.agents(text)
+            sys.exit()
     standalone = Standalone(args, args.pattern)
     if args.list:
         try:
@@ -55,6 +55,9 @@ class Standalone:
         self.model = 'gpt-4-turbo-preview'
         self.claude = False
         sorted_gpt_models, ollamaList, claudeList = self.fetch_available_models()
+        self.sorted_gpt_models = sorted_gpt_models
+        self.ollamaList = ollamaList
+        self.claudeList = claudeList
         self.local = self.model in ollamaList
         self.claude = self.model in claudeList

@@ -333,6 +336,23 @@ class Standalone:
         else:
            return sys.stdin.read()

+    def agents(self, userInput):
+        from praisonai import PraisonAI
+        model = self.model
+        os.environ["OPENAI_MODEL_NAME"] = model
+        if model in self.sorted_gpt_models:
+            os.environ["OPENAI_API_BASE"] = "https://api.openai.com/v1/"
+        elif model in self.ollamaList:
+            os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
+            os.environ["OPENAI_API_KEY"] = "NA"
+
+        elif model in self.claudeList:
+            print("Claude is not supported in this mode")
+            sys.exit()
+        print("Starting PraisonAI...")
+        praison_ai = PraisonAI(auto=userInput, framework="autogen")
+        praison_ai.main()
+

 class Update:
     def __init__(self):
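For readers skimming the diff, here is a hedged distillation of what the new agents() path boils down to outside of Fabric. The model name and prompt are illustrative placeholders; the real method derives them from the selected model and the user's input:

```python
# Hedged sketch of the new agents() flow, not the method itself.
import os
from praisonai import PraisonAI

os.environ["OPENAI_MODEL_NAME"] = "gpt-4-turbo-preview"       # Standalone's default model
os.environ["OPENAI_API_BASE"] = "https://api.openai.com/v1/"  # or http://localhost:11434/v1 plus OPENAI_API_KEY="NA" for Ollama

# auto=... asks PraisonAI to generate an agent team for the task and then run it.
praison_ai = PraisonAI(auto="write me a movie script", framework="autogen")
praison_ai.main()
```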
poetry.lock (generated, 4174 lines changed)
File diff suppressed because it is too large.
@@ -12,12 +12,12 @@ packages = [
 ]

 [tool.poetry.dependencies]
-python = "^3.10"
-crewai = "^0.11.0"
+python = ">=3.10,<3.13"
+crewai = "^0.22.5"
 unstructured = "0.10.25"
 pyowm = "3.3.0"
 tools = "^0.1.9"
-langchain-community = "^0.0.24"
+langchain-community = "^0.0.25"
 google-api-python-client = "^2.120.0"
 isodate = "^0.6.1"
 youtube-transcript-api = "^0.6.2"
@@ -25,10 +25,11 @@ pydub = "^0.25.1"
 ollama = "^0.1.7"
 anthropic = "^0.18.1"
 pyperclip = "^1.8.2"
-python-dotenv = "^1.0.1"
+python-dotenv = "1.0.0"
 jwt = "^1.3.1"
 flask = "^3.0.2"
 helpers = "^0.2.0"
+praisonai = "^0.0.18"

 [tool.poetry.group.cli.dependencies]
 pyyaml = "^6.0.1"
@@ -39,7 +40,7 @@ flask = "^3.0.2"
 flask-sqlalchemy = "^3.1.1"
 flask-login = "^0.6.3"
 flask-jwt-extended = "^4.6.0"
-python-dotenv = "^1.0.1"
+python-dotenv = "1.0.0"
 openai = "^1.11.0"
 flask-socketio = "^5.3.6"
 flask-sock = "^0.7.0"
@@ -52,7 +53,7 @@ tqdm = "^4.66.1"
 requests = "^2.31.0"
 openai = "^1.12.0"
 flask = "^3.0.2"
-python-dotenv = "^1.0.1"
+python-dotenv = "1.0.0"
 jwt = "^1.3.1"


test.yaml (new file, 44 lines)
@@ -0,0 +1,44 @@
+framework: crewai
+topic: 'write me a 20 word essay on apples
+
+  '
+roles:
+  researcher:
+    backstory: Has an extensive background in conducting research using digital tools
+      to extract relevant information.
+    goal: Gather comprehensive information about apples
+    role: Researcher
+    tasks:
+      collect_information_on_apples:
+        description: Use digital tools to find credible sources of information on
+          apples covering history, types, and benefits.
+        expected_output: Collected data on apples, including historical background,
+          varieties, and health benefits.
+        tools:
+        - ''
+  analyst:
+    backstory: Expert in analyzing large volumes of data to identify the most relevant
+      and interesting facts.
+    goal: Analyze gathered information to distill key points
+    role: Analyst
+    tasks:
+      synthesize_information:
+        description: Review the collected data and extract the most pertinent facts
+          about apples, focusing on uniqueness and impact.
+        expected_output: A summary highlighting key facts about apples, such as nutritional
+          benefits, global popularity, and cultural significance.
+        tools:
+        - ''
+  writer:
+    backstory: Specializes in creating short, impactful pieces of writing that capture
+      the essence of the subject matter.
+    goal: Craft a concise and engaging essay on apples
+    role: Writer
+    tasks:
+      write_essay:
+        description: Based on the analyzed data, write a compelling 20-word essay
+          on apples that encapsulates their essence and significance.
+        expected_output: An engaging 20-word essay on apples.
+        tools:
+        - ''
+dependencies: []
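The test.yaml above appears to be an agent definition that PraisonAI generated while exercising the new feature. As a hedged illustration only (the agent_file parameter name is taken from PraisonAI's own documentation, not from this commit, which only exercises the auto=... form), such a file could be replayed through the same Python API:

```python
# Hedged sketch: re-run a previously generated agent definition file.
# NOTE: agent_file= is an assumption based on PraisonAI's documented constructor.
from praisonai import PraisonAI

praison_ai = PraisonAI(agent_file="test.yaml", framework="crewai")
praison_ai.main()
```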