Onboarding AIURM
Learn how to use the AIURM protocol with step-by-step practical examples.
HR Analysis with Simulated Data
This test uses simulated data but reflects a real-world HR workflow.
Although workflows like this are typically designed for integration environments connecting systems and AIs, here we will use the AI chat interface as an accessible, didactic way to experience the protocol in practice.
You can complete the test partially, according to your interest or objectives — but always follow the sequence of steps up to your chosen point.
This ensures that all referenced markers are correctly defined and reusable.
Quick Start (Copy & Paste Each Step)
Simply follow the step-by-step instructions, copying and pasting each block into the chat interface of your preferred AI.
Step 1: Activate AIURM
Before getting started, you need to activate the protocol in the AI.
Refer to the Activation page for detailed instructions on how to do this.
Step 2: Load Your Employee Data
The sample data below is simulated and provided in JSON format, but you can replace it with any well-structured data set, such as tables, lists, or records.
The important thing is to keep the fields consistent with those expected by the logic that will be applied in the next steps.
[
  {
    "id": "E001",
    "name": "Ana Silva",
    "position": "Senior Analyst",
    "department": "IT",
    "salary": 8500.00,
    "tenure_months": 36,
    "performance_ratings": [8.5, 9.0, 8.8],
    "absences": 2,
    "overtime_hours": 45,
    "certifications": 3
  },
  {
    "id": "E002",
    "name": "Carlos Santos",
    "position": "Manager",
    "department": "Sales",
    "salary": 12000.00,
    "tenure_months": 48,
    "performance_ratings": [9.2, 9.1, 9.5],
    "absences": 1,
    "overtime_hours": 60,
    "certifications": 2
  },
  {
    "id": "E003",
    "name": "Maria Costa",
    "position": "Junior Analyst",
    "department": "HR",
    "salary": 4500.00,
    "tenure_months": 12,
    "performance_ratings": [7.5, 8.0, 7.8],
    "absences": 5,
    "overtime_hours": 20,
    "certifications": 1
  },
  {
    "id": "E004",
    "name": "John Smith",
    "position": "Developer",
    "department": "IT",
    "salary": 7200.00,
    "tenure_months": 24,
    "performance_ratings": [8.8, 8.5, 9.0],
    "absences": 3,
    "overtime_hours": 80,
    "certifications": 4
  },
  {
    "id": "E005",
    "name": "Paula Johnson",
    "position": "Coordinator",
    "department": "Marketing",
    "salary": 9500.00,
    "tenure_months": 60,
    "performance_ratings": [9.0, 8.9, 9.2],
    "absences": 0,
    "overtime_hours": 35,
    "certifications": 2
  }
] [*employees] #0
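If you replace the sample data with your own, a quick structural check helps keep the fields consistent with the logic defined in the next steps. The sketch below is plain Python (not part of the protocol) and assumes the field names used in the sample dataset above.

```python
# Fields expected by the logic applied in Steps 3-7 (from the sample data).
REQUIRED_FIELDS = {
    "id", "name", "position", "department", "salary", "tenure_months",
    "performance_ratings", "absences", "overtime_hours", "certifications",
}

def find_incomplete(employees):
    """Return the ids of records missing any expected field."""
    return [e.get("id", "?") for e in employees
            if not REQUIRED_FIELDS <= e.keys()]
```

For example, `find_incomplete([{"id": "E001"}])` flags `E001`, while a complete record passes silently.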
Step 3: Define Performance Rules
Analyze performance based on:
- Average rating >= 9.0 = "Excellent"
- Average rating >= 8.5 = "Very Good"
- Average rating >= 8.0 = "Good"
- Average rating < 8.0 = "Needs Improvement"
Classify engagement:
- Absences <= 1 AND overtime_hours >= 40 = "High Engagement"
- Absences <= 3 AND overtime_hours >= 20 = "Normal Engagement"
- Absences > 3 OR overtime_hours < 20 = "Low Engagement"
Growth potential:
- Certifications >= 3 AND tenure_months >= 24 = "Ready for Promotion"
- Certifications >= 2 AND tenure_months >= 12 = "Development Track"
- Other cases = "Entry Level"
[*performance_logic] #0
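To make the expected classifications concrete, the rules above can be sketched in plain Python. This is only an illustration of what *performance_logic encodes; the protocol itself needs no code, and the field names follow the sample dataset.

```python
from statistics import mean

def classify_performance(emp):
    """Apply the Step 3 rules to a single employee record."""
    avg = mean(emp["performance_ratings"])
    if avg >= 9.0:
        rating = "Excellent"
    elif avg >= 8.5:
        rating = "Very Good"
    elif avg >= 8.0:
        rating = "Good"
    else:
        rating = "Needs Improvement"

    if emp["absences"] <= 1 and emp["overtime_hours"] >= 40:
        engagement = "High Engagement"
    elif emp["absences"] <= 3 and emp["overtime_hours"] >= 20:
        engagement = "Normal Engagement"
    else:
        engagement = "Low Engagement"

    if emp["certifications"] >= 3 and emp["tenure_months"] >= 24:
        growth = "Ready for Promotion"
    elif emp["certifications"] >= 2 and emp["tenure_months"] >= 12:
        growth = "Development Track"
    else:
        growth = "Entry Level"

    return {"id": emp["id"], "performance": rating,
            "engagement": engagement, "growth_potential": growth}
```

For Ana Silva (average rating 8.77, 2 absences, 45 overtime hours, 3 certifications, 36 months of tenure) this yields "Very Good", "Normal Engagement", and "Ready for Promotion".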
Step 4: Apply Performance Logic
Apply *performance_logic to *employees [*performance_analysis] #0
You can now view the detailed analysis results or export them in JSON format for external use.
*performance_analysis #3
show only *employees with Ready for Promotion in *performance_analysis
show *employees with *performance_analysis.growth_potential = "Ready for Promotion" #1
explain reasoning for each result in *performance_analysis
show full *performance_analysis dependency tree
generate json *performance_analysis
Step 5: Define Retention Risk Rules
Analyze retention risk based on:
- Salary below 7000 AND rating >= 8.5 = "High Risk"
- Tenure > 48 months AND non-management position = "Medium Risk"
- Overtime > 60 hours AND absences > 2 = "Burnout Risk"
- Other cases = "Low Risk"
Recommended actions:
- High Risk: "Urgent salary review"
- Medium Risk: "Career development plan"
- Burnout Risk: "Workload redistribution"
- Low Risk: "Continue monitoring"
[*retention_logic] #0
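As with the performance rules, the retention rules can be sketched in plain Python for reference. Two assumptions are made explicit here: the rules are evaluated in the order listed (first match wins), and "non-management position" is interpreted as anything other than the Manager and Coordinator roles in the sample data.

```python
ACTIONS = {
    "High Risk": "Urgent salary review",
    "Medium Risk": "Career development plan",
    "Burnout Risk": "Workload redistribution",
    "Low Risk": "Continue monitoring",
}

def classify_retention(emp):
    """Apply the Step 5 rules in the order listed; first match wins."""
    avg = sum(emp["performance_ratings"]) / len(emp["performance_ratings"])
    # Assumption: management roles in the sample data.
    managerial = emp["position"] in ("Manager", "Coordinator")
    if emp["salary"] < 7000 and avg >= 8.5:
        risk = "High Risk"
    elif emp["tenure_months"] > 48 and not managerial:
        risk = "Medium Risk"
    elif emp["overtime_hours"] > 60 and emp["absences"] > 2:
        risk = "Burnout Risk"
    else:
        risk = "Low Risk"
    return {"id": emp["id"], "risk": risk, "action": ACTIONS[risk]}
```

For John Smith (80 overtime hours, 3 absences, salary 7200) the first two rules do not match, so the sketch reports "Burnout Risk" with "Workload redistribution" as the action.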
Step 6: Apply Retention Logic
Apply *retention_logic to *employees [*retention_analysis] #0
Step 7: Generate Department Summary
Create department summary based on *performance_analysis and *retention_analysis:
- Count by performance classification
- Average salary by department
- Retention risk distribution
- Top performers identified
- Priority actions by area
[*department_summary] #0
*department_summary
*department_summary #3
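The aggregation behind *department_summary can also be sketched in plain Python. This illustrative version covers only headcount and average salary by department; the counts by classification and risk distribution follow the same grouping pattern.

```python
from collections import defaultdict
from statistics import mean

def department_summary(employees):
    """Group employees by department; compute headcount and average salary."""
    by_dept = defaultdict(list)
    for emp in employees:
        by_dept[emp["department"]].append(emp)
    return {
        dept: {
            "headcount": len(group),
            "avg_salary": round(mean(e["salary"] for e in group), 2),
        }
        for dept, group in by_dept.items()
    }
```

With the sample data, the IT department groups Ana Silva and John Smith, giving a headcount of 2 and an average salary of 7850.00.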
Quick Queries (Use Anytime)
See Specific Results
Step 8: View high performers only
Show employees classified as "Excellent" in *performance_analysis
Step 9: View retention risks
Show employees with "High Risk" in *retention_analysis
Step 10: Focus on IT department
Extract only "IT" department data from *department_summary
Extract only "IT" department data from *department_summary #3
Step 11: Get brief summary
*performance_analysis #1
Step 12: Get detailed analysis
*performance_analysis #3
Step 13: Generate Action Plans
Based on *performance_analysis and *retention_analysis, create action plan with:
1. Immediate priorities (30 days)
2. Medium-term actions (90 days)
3. Long-term strategies (12 months)
4. Estimated budget
5. Action owners
[*hr_action_plan]
Step 14: Create Executive Dashboard
Combine *performance_analysis + *retention_analysis + *department_summary into executive dashboard with:
- Key performance indicators
- Critical alerts
- Top 5 priorities
- Required budget
[*executive_dashboard]
Advanced: Scenario Testing
Compare Different Promotion Criteria
Step 15: Conservative approach
Promotion only with rating >= 9.0 AND tenure >= 36 months AND certifications >= 3
[*conservative_criteria] #0
Step 16: Aggressive approach
Promotion with rating >= 8.5 AND tenure >= 18 months AND certifications >= 2
[*aggressive_criteria] #0
Step 17: Apply both
Apply *conservative_criteria to *employees [*conservative_promotions] #0
Apply *aggressive_criteria to *employees [*aggressive_promotions] #0
*aggressive_promotions as json
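The two criteria sets from Steps 15 and 16 can be sketched in plain Python to show what the scenario comparison filters on. The parameter names below are illustrative, not part of the protocol.

```python
def meets_criteria(emp, min_avg, min_tenure, min_certs):
    """True when an employee satisfies a promotion criteria set."""
    avg = sum(emp["performance_ratings"]) / len(emp["performance_ratings"])
    return (avg >= min_avg
            and emp["tenure_months"] >= min_tenure
            and emp["certifications"] >= min_certs)

# Criteria sets from Steps 15 and 16.
CONSERVATIVE = dict(min_avg=9.0, min_tenure=36, min_certs=3)
AGGRESSIVE = dict(min_avg=8.5, min_tenure=18, min_certs=2)

def promotions(employees, criteria):
    """Return the ids of employees who qualify under the given criteria."""
    return [e["id"] for e in employees if meets_criteria(e, **criteria)]
```

With the sample data, Carlos Santos (average rating 9.27, 48 months, 2 certifications) qualifies under the aggressive criteria but misses the conservative ones, which require 3 certifications.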
Step 18: Compare results
Compare *conservative_promotions with *aggressive_promotions showing:
- Number of promotions in each scenario
- Estimated financial impact
- Risks and benefits
- Final recommendation
[*scenario_comparison]
Step 19: Audit and Traceability
explain reasoning for each in *aggressive_promotions
why was John Smith included in *aggressive_promotions
explain reasoning for each result in *aggressive_promotions
explain reasoning for each result in *scenario_comparison
compare *employees, *performance_analysis, and *conservative_promotions highlighting transformation steps
trace logic from *employees to *aggressive_promotions as tree #3
show full *aggressive_promotions dependency tree
show full marker dependency tree in natural format
show full marker dependency tree in ASCII format
Marker Dependency Visualization: Advanced (Graphviz/DOT)
generate full marker dependency tree in Graphviz/DOT format
generate marker dependency subtree for *performance_analysis in Graphviz/DOT format
generate hierarchical marker dependency tree in Graphviz/DOT format
generate marker dependency tree clustered by marker type in Graphviz/DOT format
generate marker dependency tree in Graphviz/DOT format including metadata (type, timestamp, owner)
Generate full marker dependency tree in Graphviz/DOT format with grouped clusters by marker type, including metadata and color coding [*full_dependency_dot_clustered_metadata_color]
Use suffixes like #1, #2 or #3 to control the level of detail in the response.
Visualizing the Marker Dependency Tree
You can generate a complete visualization of the relationships between markers using Graphviz/DOT. This makes it easier to understand, audit, and document structured workflows with AIURM.
1. Copy the generated DOT code.
It will look similar to this:
digraph FullMarkerDependencyTree {
  rankdir=TB;
  node [shape=box, style=filled, color=lightsteelblue];
  employees [label="*employees"];
  performance_logic [label="*performance_logic"];
  // ... (other nodes and relationships)
  performance_analysis -> executive_dashboard;
}
2. Paste the code into an online Graphviz/DOT visualizer, such as:
https://dreampuf.github.io/GraphvizOnline/
Key Benefits
- Reusable logic: same rules can be applied to different datasets.
- Full traceability: each result is linked to its source data and logic.
- Scenario testing: allows easy comparison of different approaches.
- Modular analysis: components can be mixed and matched freely.
- No setup required: can be used with any system connected to an AI, whether through chat interfaces, enterprise integrations, or direct API calls.
- More efficient inference: markers eliminate ambiguity, reducing the inference effort required from the AI, with potential benefits for performance and consistency.
- DLR Pattern: *data + *logic = [*result].