Another Powerful Example of FluentCLI's Capabilities: Meta-Prompting
Yesterday, I wrote an article about using FluentCLI to write a script that uses FluentCLI to write a program, then evaluate and test that program. It was done using one monster instruction prompt, 90% of which was details about using and invoking FluentCLI, with maybe ten actual requests.
What's so amazing about using FluentCLI is that you have full access to the shell and how it functions and can give this power to AI systems once they understand the parameters of invocation.
I wanted to prove to myself that it wasn't a fluke, so I tried again but took the next step in creating a scriptable process. It's a simple variable injection, just a tiny step, but something I mentioned being able to do in the previous article on the fabric project.
This is the previous mega meta-prompt with one addition: about 3/4 of the way down, you will notice the line '$prompt_instructions'. I will get to how this is used in a couple of paragraphs.
eval "$( fluent GPT4oToolAgentRepoCloud "
you are an AI assistant and you can issue macos commands by running the following command to ask an AI to generate a command
\"eval \$\(fluent \<flowname\> \'request/to/send/ai/that/generates/a/command\')\".
The -p flag tells FluentCLI to parse the response for code blocks inside a \`\`\` code \`\`\` markdown syntax block. You use this flag if you request that the AI respond with code of some sort. For example: \"eval \$\(fluent \<flowname\> \'write a python function to output the first 1000000 entries in the fibonacci sequence\' -p)\".
If you want to output that to a file you would issue the command, \"eval \$\(fluent \<flowname\> \'write a python function to output the first 1000000 entries in the fibonacci sequence\' -p > /location/of/the/file)\".
If you want to read a file into the command you would run, \"eval \$\(cat path/to/file | fluent \<flowname\> \'write a python function to output the first 1000000 entries in the fibonacci sequence\' -p)\".
You can issue any command generated from the AI system in this way, including the command structure, pipelines, and files you want.
You can pipeline fluent commands to other fluent commands, for example:
\"eval \$\(fluent \<flowname\> \'write a python function to output the first 1000000 entries in the fibonacci sequence\' -p | fluent \<flowname\> \'write a python function to process a list of numbers looking for primes\')\".
The potential flownames are LocalGoogleGeminiChain, LocalGoogleFlashChain, GroqLLama370b8192AgentRepoCloud, GroqLLama38bToolAgentRepoCloud, GroqGemma7bAgentRepoCloud, HaikuChain, SonnetChain, OpusChain.
Additionally you can create images with the flownames MakeLeonardoImagePost and MakeDalleImagePost; they will respond with a markdown-based link. Send the output to a text file; do not try to deal with the image itself, just work with the link.
And you can use another flow to generate the prompt for the image, for example: \"eval \$\(cat out/of/previous/file | fluent LocalGoogleGeminiChain \'create a creative prompt to generate an image for this content\' -p | fluent MakeDalleImagePost 'create this image:' > ~/Downloads/image.md)\".
You can create google docs with the flowname, MakeGoogleDocsCreator, by piping using pandoc to convert markdown to standalone html5 and sending it to the flow stdin pipeline.
Do not use the -p flag with the image or googledoc flows; it is only for when you expect to need to parse a response for code block output. Do not use it when generating content, or the images in the content will be parsed out.
For example, this is wrong: eval \"\$\(fluent LocalGoogleGeminiChain \'Write an introduction for a blog post about unicorns and elves\' \-p \> ~/Downloads/introduction.md\) because it's a fluent command that is asking for content to be generated.
This is correct: eval \"\$\(fluent LocalGoogleGeminiChain \'Write a python program to make anagrams from the bill of rights\' \-p \> ~/Downloads/test.py)\", because it is asking for something that will result in a block of code.
---
$prompt_instructions
---
Remember you can read and write files to the filesystem with all commands.
You can direct the output of the flows to files with \> ~/Downloads/filename.extension, making the full command structure to output files \"eval \$\(fluent \<flowname\> \'request/to/send/ai/that/generates/a/command\' \-p \> ~/Downloads/filename.extension\)\".
If you send content into a fluent flow always do it through the pipeline as the example above describes. Only use the -p flag with fluent if you expect to issue a system command from the response; otherwise just eval fluent <flowname> \'request\' is sufficient. Ensure each command is syntactically correct so it can run without error. You have no limits to what you can create as long as you use this structure. Be sure to prompt the AIs in the best way to achieve the results.
" -p | subl --)"
So, before I ran that prompt, I assigned a value to prompt_instructions for the game I wanted to create. I've tried this multiple times with multiple different concepts.
The resulting games aren't amazing, but the meta prompt has worked in creating multiple scripts to create and evaluate multiple types of games on a rudimentary level.
This is a short Loom of a supposed Excitebike and Space Invaders mashup.
Here is a Loom of a Rogue-like game that was generated.
Variable Replacement in the Prompts
Since FluentCLI works in the shell environment, everything in a shell environment is available as a tool. This means we can easily set up scripts with simple or complex variable replacements in the prompt.
The prompt template I show in the first code section of this article is the mega prompt I wrote to instruct the AIs on how to work with FluentCLI. It references the variable $prompt_instructions, which is replaced by the value I assign before running the prompt.
As mentioned above, I created multiple games by changing the value of the prompt_instructions variable. This is nothing fancy; it's just shell variable expansion inside a double-quoted string.
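Stripped to its essentials, the injection works like this. The request text here is a hypothetical stand-in, not the full mega prompt:

```shell
# Hypothetical stand-in for the real request; any text works here.
prompt_instructions='Build a creative rogue clone in Python.'

# Because the mega prompt is a double-quoted string, the shell expands
# $prompt_instructions before fluent ever sees the prompt. Single quotes
# around the string would leave the literal text '$prompt_instructions' in it.
mega_prompt="You are an AI assistant with FluentCLI access.
---
$prompt_instructions
---
Ensure each command is syntactically correct."

printf '%s\n' "$mega_prompt"
```

Swapping the value assigned to prompt_instructions is the entire per-game change; the surrounding instructions stay identical run to run.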
The text below is the prompt instruction for creating the Rogue game above. It's relatively straightforward and could continue to be refined and improved. Effectively, the better the software specs you define here are, the better the outcome. To a point, as with all things AI right now, there are diminishing returns as complexity increases and the context in the middle gets lost.
prompt_instructions='Now with this understanding I want you to write a script that will create a python based game using the above methodology to create, evaluate, and test it and ensure it works. The game I want is a creative rogue clone, complete with procedurally generated levels, monsters, and treasure. You can use symbols, ASCII, text, and emoji for graphics, and it should be fully functional in a desktop environment like macOS. There should be a monster generator, a treasure generator, and a map generator so that the levels, monsters, and treasure are all unique each time it runs. The script should continue to loop until the game is built and functional and ask for validation from the user. When the game is done, create an instruction readme.md as well. The game needs to be playable and delivered as a working, error-free game. Be careful how you redirect output to files when looping so as not to overwrite the files if an error occurs; be aware of the need to append vs overwrite, and make sure to ask for script code that works on zsh and macOS.'
It generated a beautiful script that uses Gemini LLMs for all of the creation steps. It doesn't need to; it can use any of the flownames I provide in the primary prompt. In practice, I've only had successful programs built with Gemini and GPT-4o as the primary engines.
eval $(fluent LocalGoogleGeminiChain 'write a python script to set up the basic structure of a rogue-like game. The script should include a Game class with an initialization method that sets up the game board and a run method that starts the game loop.' -p > ~/Downloads/rogue_game.py)
eval $(fluent LocalGoogleGeminiChain 'write a python module to generate a procedurally generated map for a rogue-like game. The map should be represented using ASCII characters, and should include walls, floors, and doors. Make sure to include a function that generates a new map each time it is called.' -p > ~/Downloads/map_generator.py)
eval $(fluent LocalGoogleGeminiChain 'write a python module to generate monsters for a rogue-like game. Each monster should have unique attributes such as name, health, attack power, and defense. Make sure to include a function that generates a new set of monsters each time it is called.' -p > ~/Downloads/monster_generator.py)
eval $(fluent LocalGoogleGeminiChain 'write a python module to generate treasure for a rogue-like game. Each treasure should have unique attributes such as type, value, and rarity. Make sure to include a function that generates a new set of treasure each time it is called.' -p > ~/Downloads/treasure_generator.py)
eval $(cat ~/Downloads/rogue_game.py | fluent LocalGoogleGeminiChain 'modify this script to include the map, monster, and treasure generation. Ensure the game loop is functional and includes player input and interactions with the environment, monsters, and treasures.' -p > ~/Downloads/rogue_game_full.py)
eval $(fluent LocalGoogleGeminiChain 'validate the following python script for a rogue-like game and identify any issues: $(cat ~/Downloads/rogue_game_full.py). If there are issues, provide the corrected script.' -p > ~/Downloads/validated_rogue_game.py)
eval $(fluent LocalGoogleGeminiChain 'write a README.md file with instructions on how to play the generated rogue-like game. Include details on controls, objectives, and any other relevant information.' -p > ~/Downloads/README.md)
while ! python3 ~/Downloads/validated_rogue_game.py; do
echo "Validation failed. Re-generating the game script."
eval $(fluent LocalGoogleGeminiChain 'validate the following python script for a rogue-like game and identify any issues: $(cat ~/Downloads/rogue_game_full.py). If there are issues, provide the corrected script.' -p > ~/Downloads/validated_rogue_game.py)
done
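The append-vs-overwrite concern from my prompt instructions can be handled by never writing regenerated code directly over a good copy. A minimal sketch of that pattern, where generate_game is a placeholder I made up for the fluent validation call, not the generated script's actual code:

```shell
# generate_game is a hypothetical stand-in for the fluent validation call;
# here it just emits a trivial runnable script so the sketch is self-contained.
generate_game() {
    echo 'print("rogue build ok")'
}

target=./validated_rogue_game.py
tmp=$(mktemp)
generate_game > "$tmp"

# Promote the attempt only if it actually runs; a failed attempt is discarded
# instead of clobbering the last working copy.
if python3 "$tmp" > /dev/null 2>&1; then
    mv "$tmp" "$target"
else
    rm -f "$tmp"
fi
```

Wrapped in the same while loop the script already uses, this keeps a broken regeneration from blanking out files that previously worked.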
The resulting Rogue game.
And these files were produced. I will not upload them or create a repo for them at this point, unless it's requested.
As you can see from the output, the script failed to complete the validation portion and left that code file and the readme blank. I failed to provide clear guidance in the initial prompt that generated the above script. The problem sits in this one line:
eval $(fluent LocalGoogleGeminiChain 'validate the following python script for a rogue-like game and identify any issues: $(cat ~/Downloads/rogue_game_full.py). If there are issues, provide the corrected script.' -p > ~/Downloads/validated_rogue_game.py)
This treats the fluent command like an if/then statement and assumes that the script will be output again if the processing doesn't find a problem. Unless explicitly stated, this is not the case. I must update the core prompt to include some instruction around this concept so it isn't a repeating issue.
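A sketch of how I would reword that request: state explicitly that the script must come back in full even when nothing is wrong. (As a side note, the original line wraps $(cat ...) in single quotes, so the shell never expanded the script contents into the prompt at all; a double-quoted request avoids that.) The wording below is my assumed repair, not a tested FluentCLI invocation:

```shell
# Assumed rewording of the validation request; not a verified fluent run.
validation_request='validate the following python script for a rogue-like game.
If there are issues, respond with the fully corrected script.
If there are no issues, respond with the original script unchanged.
Always return the complete script inside a code block.'

printf '%s\n' "$validation_request"

# It would then be used roughly like this (untested sketch; note the double
# quotes so $(cat ...) actually expands):
# eval $(fluent LocalGoogleGeminiChain "$validation_request $(cat ~/Downloads/rogue_game_full.py)" -p > ~/Downloads/validated_rogue_game.py)
```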
Still, in the end, a playable text-based Rogue game was created.
I've run through over 20 different tests and have about a 50-60% success rate on the prompt delivering a runnable program. That's pretty good since I haven't refined the prompting technique or instructions much. This all started as a proof of concept after seeing all the talk about the amazing Microsoft Autogen Studio.
I have something else to point out about this work. I haven't used any chatflows with Python interpreters built into them. I will play with it more this afternoon. I'm not sure if it will improve or diminish the results.
The environments Langflow and FlowiseAI provide for Python execution are sandboxed and quite restrictive. Flowise is more restricted than Langflow, and both are far more limited than letting FluentCLI create and invoke directly in my shell environment.
These are such fun and exciting times. I'm off to play and discover more of FluentCLI's depths and applied usage.
Thanks for reading, and have a beautiful day!