CyborgShell projects are JSON files that define transformer pipelines - directed acyclic graphs (DAGs) where files are nodes and transformers are edges. When a source file becomes "dirty" (modified), all linked transformers execute automatically in cascade.
{
  "projectName": "my-project",
  "files": [
    {
      "fn": "filename.txt",           // Filename
      "mt": "text/plain",             // MIME type
      "mte": "text",                  // Editor type (text/hex)
      "ln": 1,                        // Linked to file index (null if none)
      "pl": "transformer.xfrm",       // Plugin/transformer name
      "arg": "arguments here",        // Transformer arguments
      "fl": "",                       // File flavour
      "st": ""                        // Status
    }
  ]
}
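For example, a minimal two-file project in this format (the filenames and the chatgpt prompt are illustrative):

{
  "projectName": "summarize-demo",
  "files": [
    { "fn": "input.txt", "mt": "text/plain", "mte": "text", "ln": null, "pl": "", "arg": "", "fl": "", "st": "" },
    { "fn": "summary.txt", "mt": "text/plain", "mte": "text", "ln": 1, "pl": "chatgpt.xfrm", "arg": "%PROVIDER% summarize the input", "fl": "", "st": "" }
  ]
}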
chatgpt.xfrm:
- Services: openai, claude, gemini, or %PROVIDER% for the configured default
- Sessions maintain context across calls
- Scan mode: looks for AI PROMPT START/END blocks in the source (see the example after this list)

session.xfrm:
- Commands: clear, save, load, train
- Use it to manage ChatGPT sessions in pipelines
- Train with files or other sessions
- See the Session Management Guide for details
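Scan-mode sketch, as referenced above: a source file carrying its prompt inline. The exact marker text is an assumption based on the START/END convention:

// AI PROMPT START
// Add input validation to the function below.
// AI PROMPT END
function divide(a, b) { return a / b; }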
Simple A → B → C flow:
File 1: input.txt (ln: null)
File 2: processed.txt (ln: 1, pl: chatgpt.xfrm)
File 3: output.txt (ln: 2, pl: translate.xfrm)
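Built with two link commands (link TARGET SOURCE, as in the command reference below; the arguments are illustrative):

link 2 1 chatgpt %PROVIDER% process the input
link 3 2 translate Japanese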
One source, multiple outputs:
File 1: source.txt (ln: null)
File 2: english.txt (ln: 1, pl: passthrough.xfrm)
File 3: japanese.txt (ln: 1, pl: translate.xfrm, arg: "Japanese")
File 4: chinese.txt (ln: 1, pl: translate.xfrm, arg: "Chinese")
Multiple sources, one output:
File 1: data1.csv (ln: null)
File 2: data2.csv (ln: null)
File 3: data3.csv (ln: null)
File 4: merged.csv (ln: "1,2,3", pl: csvmerge.xfrm)
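The multi-input link lists its sources comma-separated, with no spaces:

link 4 1,2,3 csvmerge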
Conditional processing:
File 1: content.txt (ln: null)
File 2: analysis.txt (ln: 1, pl: chatgpt.xfrm, arg: "openai analyze quality 1-10")
File 3: gate.txt (ln: 2, pl: chatgpt.xfrm, arg: "openai if score >= 8 output YES else NO")
File 4: final.txt (ln: 3, pl: blocker.xfrm)
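As link commands:

link 2 1 chatgpt openai analyze quality 1-10
link 3 2 chatgpt openai if score >= 8 output YES else NO
link 4 3 blocker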
Complex DAG with multiple paths:
File 1: input → File 2: process1 → File 5: merge
                                   ↗
File 3: input → File 4: process2 ↗
Preserve original while processing:
File 1: working.js (ln: 2, pl: passthrough.xfrm)
File 2: ai-processor.js (ln: 1, pl: chatgpt.xfrm)
File 3: backup.js (ln: 1, pl: passthrough.xfrm)
AI-assisted code generation with automatic backup.
Flow:
#1 coder.js (write code/prompts) → passthrough → #3 coderbackup.js (safety backup)
    ↓
chatgpt (AI processes)
    ↓ 
#2 coderai.js
    ↓
passthrough → back to coder.js
Usage: write code and prompts in coder.js; saving marks it dirty and triggers the pipeline (full setup commands appear in the aicoder walkthrough below).
Generate content and translate to multiple languages simultaneously.
Flow:
#1 aitest_prompt.txt (user prompt)
    ↓
chatgpt
    ↓
#2 aitest_story.txt (AI-generated story)
    ↓ 
blocker (quality check)
    ↓ 
#3 aitest_english.txt (validated content)
    ├→ translate → #4 aitest_chinese.txt
    ├→ translate → #5 aitest_japanese.txt
    ├→ translate → #6 aitest_indonesian.txt
    └→ translate → #7 aitest_tagalog.txt
Multi-stage research workflow with quality gate.
Flow:
#1 data-input (user request)
    ↓
chatgpt
    ↓
#2 raw-dataset (AI generated data)
    ├→ basic-statistics ───────┐
    ├→ outlier-analysis ───────┤
    ├→ frequency-distribution ─┼→ latex-statistics → # comprehensive-report
    ├→ regression-analysis ────┘                              ↓
    └→ # JS-code                                         passthrough
                                                              ↓
                                                   # quality-assessment
                                                              ↓
                                                       quality-gate
                                                              ↓
                                                   # final-publication
Quality Gate: Report only publishes if AI self-assessment scores ≥8/10.
Before creating links, set up your file structure:
newfile 10              # Creates files 1-10
files
file 1               # Go to file 1 if not there already
This creates 10 empty working files. Transformers will automatically load from file 11 onward when referenced.
Mental Model: a project file is a snapshot of the whole graph - filenames, links, and transformer arguments - so loading it restores the entire pipeline in one step:
project save myproject     # Saves as myproject.prj
project load myproject     # Loads myproject.prj
Note: .prj extension is automatic - don't include it in the name.
# Setup
newfile 3
files
file 1
# Name files
filename coder.js
file 2
filename coderai.js
file 3
filename coderbackup.js
file 1
files
# Create links
link 2 1 chatgpt %PROVIDER% process the file prompts start with # or // #
link 3 1 passthrough
link 1 2 passthrough
files
# Save
project save aicoder
# Setup files
newfile 7
files
file 1
# Name them
filename aitest_prompt.txt
file 2
filename aitest_story.txt
file 3
filename aitest_english.txt
file 4
filename aitest_chinese.txt
file 5
filename aitest_japanese.txt
file 6
filename aitest_indonesian.txt
file 7
filename aitest_tagalog.txt
file 1
files
# Create links
link 2 1 chatgpt %PROVIDER%
link 3 2 blocker
link 4 3 translate Chinese
link 5 3 translate Japanese
link 6 3 translate Indonesian
link 7 3 translate Tagalog
files
# Save
project save aitranslate
# Setup
newfile 5
files
file 1
# Name files
filename source-code.js
file 2
filename code-analysis.txt
file 3
filename suggestions.txt
file 4
filename test-cases.js
file 5
filename review-report.md
file 1
files
# Create links
link 2 1 chatgpt openai analyze this code for bugs, security issues, and code smells
link 3 1 chatgpt openai suggest improvements for readability, performance, and maintainability
link 4 1 chatgpt openai generate comprehensive unit tests for this JavaScript code
link 5 2,3,4 chatgpt openai compile a comprehensive code review report with the analysis, suggestions, and test coverage
files
# Save
project save code-review
# Setup
newfile 7
files
file 1
# Name files
filename version-a.txt
file 2
filename version-b.txt
file 3
filename version-c.txt
file 4
filename diff-a-b.txt
file 5
filename diff-b-c.txt
file 6
filename merged-all.txt
file 7
filename conflict-resolution.txt
# Create links (multi-input)
file 4
link 4 1,2 filediff
file 5
link 5 2,3 filediff
file 6
link 6 1,2,3 filejoin %NL%%NL%--- NEXT VERSION ---%NL%%NL%
file 7
link 7 4,5 chatgpt openai analyze these diffs and suggest how to resolve conflicts, then create a merged version
# Save
project save document-merge
# Setup
newfile 7
files
file 1
# Name files
filename raw-data.csv
file 2
filename validation-report.txt
file 3
filename stats-summary.txt
file 4
filename cleaned-data.csv
file 5
filename quality-check.txt
file 6
filename approved-data.csv
file 7
filename final-report.md
# Create pipeline with quality gate
file 2
link 2 1 chatgpt openai analyze this CSV for data quality issues: missing values, duplicates, outliers, format errors
file 3
link 3 1 filestats
file 4
link 4 1 chatgpt openai clean the data: remove duplicates, fill missing values appropriately, fix formatting
file 5
link 5 4 chatgpt openai verify the cleaned data quality and assign a score 1-10
file 6
link 6 5 blocker
file 7
link 7 2,3,5 chatgpt openai create a data quality report with validation results, statistics, and final quality score
# Save
project save data-validation
# File Management
newfile N                 # Create N files, go to file N
file X                    # Switch to file X
files                     # List all files (shows D for dirty)
filename name.ext         # Name current file
# Linking
link TARGET SOURCE plugin args       # Single input: TARGET receives, SOURCE feeds
link TARGET SRC1,SRC2 plugin args    # Multi-input (NO SPACES in the source list!)
# Projects
project save name         # Save as name.prj
project load name         # Load name.prj
project name "My Project" # Set project display name
# Saving
saveall                   # Save all dirty (D) files at once
# Viewing
list                      # Show current file content
type filename             # View file without loading
When transformers process files, they're marked as D (dirty):
Files:
  1 *D input.txt 50 bytes/5 lines
  2  D output.txt 120 bytes/10 lines
* current file, D dirty file
This means that after processing completes, one saveall command saves all results!
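A typical cycle:

file 1       # switch to the source and edit it; linked transformers cascade
files        # dirty outputs now show the D flag
saveall      # write every dirty file in one step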
Don't forget to add transformer files to your project:
{
  "fn": "chatgpt.xfrm",
  "mt": "text/javascript transformer",
  "mte": "text",
  "ln": null,
  "pl": "",
  "arg": "",
  "fl": "",
  "st": ""
}
Use the %PROVIDER% variable if you only have a single provider:
"arg": "%PROVIDER% translate this text"
Use clear, hierarchical names so related pipeline files group together, e.g., aitest_prompt.txt, aitest_story.txt, aitest_english.txt.
Use ChatGPT sessions to maintain conversation context:
"arg": "openai session: analysis calculate statistics"
Later:
"arg": "openai session: analysis what was the mean?"
Always preserve source material:
source → processor (ln: source)
      → backup (ln: source, pl: passthrough.xfrm)
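As link commands (assuming source=1, processor=2, backup=3; the prompt is illustrative):

link 2 1 chatgpt %PROVIDER% process the content
link 3 1 passthrough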
Add blockers after quality checks:
content → validator → blocker → publication
Process once, output many ways:
source → translate (Japanese)
      → translate (Chinese)
      → translate (Spanish)
      → speak (en-us)
1. data-input.txt
2. analysis.txt (ln: 1, chatgpt: "analyze data")
3. visualization.txt (ln: 2, chatgpt: "create R code")
4. report.txt (ln: "2,3", chatgpt: "write report")
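As link commands (provider token added per the arg format shown earlier):

link 2 1 chatgpt %PROVIDER% analyze data
link 3 2 chatgpt %PROVIDER% create R code
link 4 2,3 chatgpt %PROVIDER% write report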
1. brief.txt
2. draft.txt (ln: 1, chatgpt: "write article")
3. edited.txt (ln: 2, chatgpt: "improve clarity")
4. final.txt (ln: 3, chatgpt: "format for publication")
1. raw1.csv
2. raw2.csv
3. raw3.csv
4. merged.csv (ln: "1,2,3", csvmerge)
5. stats.txt (ln: 4, filestats)
6. analysis.txt (ln: 5, chatgpt: "interpret statistics")
1. requirements.txt
2. code.js (ln: 1, chatgpt: "generate JS code")
3. tests.js (ln: 2, chatgpt: "create unit tests")
4. docs.md (ln: 2, chatgpt: "document code")
"arg": "%PROVIDER% process this"
Configure the default provider using csconfig.
Use persistent sessions for stateful processing:
"arg": "openai session: validator validate input then remember result"
File 1: template.html (HTML with {{placeholders}})
File 2: data.json (JSON dictionary)
File 3: output.html (ln: "1,2", template.xfrm)
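A minimal sketch of the two inputs (the placeholder keys are hypothetical):

template.html:
<h1>{{title}}</h1>
<p>{{summary}}</p>

data.json:
{ "title": "Status Report", "summary": "All pipelines passing." }

Wired with: link 3 1,2 template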
draft → improve1 → improve2 → improve3 → final
  ↓       ↓          ↓          ↓          ↓
backup1 backup2  backup3   backup4   backup5
Bad:  "openai process this"
Good: "openai extract key insights and summarize in 3 bullet points"
"openai first validate the data structure, then calculate statistics"
"openai use the data from the previous analysis to generate visualizations"
File 1 → "openai analyze problem"
File 2 → "openai propose solutions based on analysis"
File 3 → "openai select best solution and explain"
The session.xfrm transformer allows you to manage ChatGPT sessions within pipelines:
# Load pre-trained session
link 2 1 session legal: load
link 3 2 chatgpt openai session: legal review contract
# Train session with input
link 2 1 session research: train
link 3 2 chatgpt openai session: research analyze
# Clear session before use
link 2 1 session project: clear
link 3 2 chatgpt openai session: project start fresh
# Save session after processing
link 2 1 chatgpt openai session: work process this
link 3 2 session work: save
# Train from files
link 2 1 session api: train swagger.json,examples.txt
# Train from other sessions
link 2 1 session fullstack: train frontend:,backend:,database:
# Train from input sources
link 2 1 session docs:
# Automatically trains session 'docs' with file 1's content
# Share context across multiple links
link 2 1 chatgpt openai session: analysis examine data
link 3 2 chatgpt openai session: analysis find patterns
link 4 3 chatgpt openai session: analysis synthesize findings
# All three links share the same session context
For comprehensive session management documentation, see the Session Management Guide.
Automated code analysis with parallel review tracks.
Flow:
source-code.js
    ├→ code-analysis.txt (bugs, security)
    ├→ suggestions.txt (improvements)
    └→ test-cases.js (unit tests)
         ↓
    review-report.md (combines all three)
Three-way diff and intelligent conflict resolution.
Flow:
version-a.txt ────┐
                  ├→ diff-a-b.txt ──┐
version-b.txt ────┤                 ├→ conflict-resolution.txt
                  ├→ diff-b-c.txt ──┘
version-c.txt ────┘
Single brief → multiple platform-optimized outputs.
Flow:
content-brief.txt
    ├→ blog-post.md
    ├→ twitter-thread.txt
    ├→ linkedin-post.txt
    ├→ email-newsletter.html
    ├→ video-script.txt
    └→ infographic-data.json
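Each output is a single link from the brief; for example (prompts are illustrative):

link 2 1 chatgpt %PROVIDER% write a blog post from this brief
link 3 1 chatgpt %PROVIDER% turn this brief into a twitter thread

and so on, one link per remaining output.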
Quality-gated data cleaning workflow.
Flow:
raw-data.csv
    ├→ validation-report.txt
    ├→ stats-summary.txt
    └→ cleaned-data.csv → quality-check.txt → blocker → approved-data.csv
                                                              ↓
                                                    final-report.md
Generate complete API docs from endpoint definitions.
Flow:
api-endpoints.json
    ↓
openapi-spec.json
    ├→ markdown-docs.md ────────┐
    ├→ postman-collection.json ─┤
    ├→ client-sdk-csharp.cs ────┼→ integration-guide.md
    └→ client-sdk-javascript.js ┘
Comprehensive educational content from a single topic.
Flow:
topic.txt
    ├→ lesson-plan.md ─────────┐
    ├→ study-notes.md ─────────┤
    ├→ practice-exercises.txt ─┼→ teacher-guide.md
    ├→ quiz.json ──────────────┤
    └→ visual-aids.txt ────────┘
Multi-stage creative writing with revision cycle.
Flow:
story-premise.txt
    ├→ character-profiles.md ──┐
    ├→ world-building.md ──────┼→ plot-outline.md
    └──────────────────────────┘        ↓
                            chapter-1, 2, 3 drafts
                                     ↓
                              full-draft.txt
                                     ↓
                            editorial-notes.txt
                                     ↓
                            revised-draft.txt
                                     ↓
                                synopsis.txt
Data-driven HTML reports with AI insights.
Flow:
report-template.html ─────────────────────┐
                                          │
company-data.json ────┐                   │
financial-data.json ──┼→ merged-data.json ┼→ generated-report.html ──┐
performance-data.json ┘       ↓                                      │
                         insights.txt ───────────────────────────────┤
                                                                     ↓
                                                             final-report.html
CyborgShell projects enable powerful AI-driven workflows through declarative pipeline definitions. Master these patterns to build sophisticated automation ranging from simple translations to complex research pipelines with quality gates and multi-stage processing.
Remember: Files are nodes, transformers are edges, and dirty flags trigger cascading updates through your DAG. Build wisely! 🚀