In this tutorial, you will transition from informal reasoning to a structured argumentation model. You will learn how to capture logic using the jPipe syntax and how to render that logic into a justification diagram.
Objectives
By the end of this guide, you will be able to:
- Create a justification model using the .jd format.
- Understand the relationship between evidence, strategies, and conclusions.
- Export your model to a graphical format (SVG) using the jPipe CLI.
- Activate the diagnostic mode to inspect how the compiler interprets your model.
The Scenario: Deploying a Machine Learning Model
Imagine you are a software engineer at ACME Corp. Your flagship product is a machine learning classifier. The company markets this model as “performant”, but a vague marketing term isn’t enough. You need to provide a structured argument based on evidence.
Defining the Scope
“Performant” can mean many things: low latency, low memory footprint, or high accuracy. To create a clear justification, we must make the implicit explicit. For this scenario, we will define performance specifically as predictive accuracy.
We can break down our informal reasoning as follows:
- The Goal: Prove the model is performant.
- The Logic: If the F1-score is at least 0.85, we consider it performant.
Note on F1-score
In the context of machine learning classification, the F1-score is a critical metric because it provides a balanced measure of a model’s accuracy. While simple accuracy counts how many predictions were correct, the F1-score is the harmonic mean of Precision (how many items identified as positive were actually positive) and Recall (how many actual positives were correctly identified). This is particularly relevant when datasets are imbalanced. For example, a model detecting a rare engine failure might be 99% accurate by simply predicting “no failure” every time, but its F1-score would be 0, revealing that it never actually detects a failure.

However, while a high F1-score is strong evidence, it is rarely sufficient on its own. It tells you how the model performed on a specific dataset, but it does not account for data drift, robustness against adversarial inputs, or the quality of the training data itself.
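The arithmetic behind that note can be made concrete. The sketch below computes the F1-score from raw counts; the counts themselves are invented for illustration:

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall; defined as 0 when there
    are no true positives (the degenerate 'never predicts failure' case)."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Imbalanced case from the text: 10 real failures among 1000 samples.
# A model that always predicts "no failure" is 990/1000 = 99% accurate,
# yet it has zero true positives:
always_no = f1_score(tp=0, fp=0, fn=10)

# A useful detector: catches 9 of the 10 failures with 3 false alarms.
detector = f1_score(tp=9, fp=3, fn=1)

print(always_no)           # 0.0
print(round(detector, 3))  # 0.818
```

Because the harmonic mean collapses toward the weaker of precision and recall, a model cannot hide a blind spot behind raw accuracy, which is exactly why the tutorial anchors its claim to an F1 threshold rather than accuracy.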
Step 1: Mapping the Argumentation Model
Using the core concepts of the jPipe ecosystem (derived from the Toulmin model), we can structure this reasoning:
- Conclusion: The claim we want to prove (“The model is performant”).
- Strategy: The reasoning or “warrant” that connects our data to our conclusion (“The F1-score is >= 0.85”).
- Evidence: The “grounds” or facts that support our strategy (“The model is available” and “The test dataset is available”).
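Before writing any jPipe code, it can help to see the mapping above as a plain data structure: one conclusion, supported by strategies, each of which must rest on evidence. This is illustrative Python only, not part of jPipe:

```python
# A minimal in-memory model of the Toulmin-style structure above.
# Each strategy maps to the list of evidence items that ground it.
justification = {
    "conclusion": "The model is performant",
    "strategies": {
        "The F1-score is >= 0.85": [
            "The model is available",
            "The test dataset is available",
        ],
    },
}

def is_grounded(j):
    """A justification is grounded only if every strategy
    rests on at least one piece of evidence."""
    return all(evidence for evidence in j["strategies"].values())

print(is_grounded(justification))  # True
```

A strategy with an empty evidence list would make `is_grounded` return False, which mirrors the kind of dangling-support problem the jPipe diagnostics (Step 4) are designed to surface.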
Step 2: Writing the jPipe Model
Create a new file named initial.jd in your favorite editor (or the jPipe IDE) and enter the following code:
justification performant {
    // Our primary claim
    conclusion c is "My model is performant"

    // The logic connecting evidence to the conclusion
    strategy s is "F1-score is greater than 0.85"
    s supports c

    // The supporting evidence (grounds)
    evidence e1 is "The model is available"
    e1 supports s

    evidence e2 is "The test dataset is available"
    e2 supports s
}
Step 3: Exporting to a Graphical Representation
While the textual format is excellent for version control and automation, graphical diagrams are better suited for reviews and documentation.
Using the CLI
To transform your code into a visual diagram, use the jpipe process command. We will specify the input file, the specific model name, the desired format, and the output path.
Run the following command in your terminal:
$ jpipe process -i initial.jd -m performant -f svg -o initial.svg
Command Breakdown
- process: Tells jPipe to process a model.
- -i initial.jd: The input source file.
- -m performant: The name of the justification model to export.
- -f svg: The output format (Scalable Vector Graphics).
- -o initial.svg: The filename for the output file containing the diagram.
Resulting diagram
The compiler will generate an SVG file visualizing your argument. In this diagram:
- Rectangles represent Conclusions.
- Hexagons represent Strategies.
- Notes represent Evidence.
Step 4: Using the Diagnostic Mode
As your models grow in complexity, you may want to peek “under the hood” to see how the compiler interprets your logic. The jpipe compiler includes a diagnostic command designed for this purpose.
Running a diagnostic check helps you verify that all elements are correctly identified, ensures there are no syntax errors, and provides a trace of the compiler’s internal actions.
To run diagnostics on your model, use the following command:
$ jpipe diagnostic -i initial.jd
Understanding the Output
When you run the diagnostic command, you will see a detailed report divided into several key sections:
- Diagnostics: This section lists any warnings or errors. A result of (none) means your model is syntactically sound and logically connected.
- Model Summary: Provides a quick audit of your justification. In our example, it confirms we have one conclusion, one strategy, and two pieces of evidence.
- Symbol Table: This maps your identifiers (like c, s, e1) to their specific lines and columns in the .jd file. This is incredibly useful for debugging large files.
- Executed Actions: This is the most critical section for power users. It shows the step-by-step instructions the compiler executed to build the model, such as create_conclusion or support.
The output should look similar to this:
=== Diagnostics ===
(none)
=== Action Statistics ===
commands: 8 total (0 macro)
deferrals: 0
=== Model Summary ===
justification "performant"
elements: conclusion(1), strategy(1), evidence(2)
=== Symbol Table ===
justification "performant" [initial.jd:1:14]
c 2:13
s 4:11
e1 7:11
e2 10:11
=== Executed Actions ===
1. create_justification('performant').
2. create_conclusion('performant', 'c', 'My model is performant').
3. create_strategy('performant', 's', 'F1-score is greater than 0.85').
4. support('performant', 'c', 's').
5. create_evidence('performant', 'e1', 'The model is available').
6. support('performant', 's', 'e1').
7. create_evidence('performant', 'e2', 'The test dataset is available').
8. support('performant', 's', 'e2').
Using the diagnostic mode ensures that your structured argumentation is exactly what you intended before you move on to complex operations like CI/CD integration or automated validation.
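As a first step toward that CI/CD integration, a pipeline could fail the build whenever the Diagnostics section of the report is not clean. The following sketch is not an official jPipe feature; it simply assumes the report format shown above stays stable and scans the section between the headers:

```python
def diagnostics_clean(report: str) -> bool:
    """Return True only if the '=== Diagnostics ===' section
    contains exactly the single entry '(none)'."""
    lines = iter(report.splitlines())
    for line in lines:
        if line.strip() == "=== Diagnostics ===":
            entries = []
            # Collect entries until the next '=== ... ===' header or EOF.
            for entry in lines:
                if entry.startswith("==="):
                    break
                if entry.strip():
                    entries.append(entry.strip())
            return entries == ["(none)"]
    return False  # No Diagnostics section found: treat as a failure.

# Trimmed copy of the report shown above.
sample = """=== Diagnostics ===
(none)
=== Action Statistics ===
commands: 8 total (0 macro)"""

print(diagnostics_clean(sample))  # True
```

In a shell pipeline this check would wrap the documented command, e.g. piping the output of `jpipe diagnostic -i initial.jd` into the script and exiting non-zero when `diagnostics_clean` returns False.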