IFN for Prioritization

A Multi-Criteria Decision Making (MCDM) tool based on Ideal Flow Networks (IFN).

By Kardi Teknomo, PhD

IFN Prioritization Tutorial

📖 Motivation & Method Comparison

Why use IFN? (Intuition)

The Challenge: How do you choose the "best" project when Option A is cheap but slow, and Option B is fast but expensive?

The IFN Solution: Instead of simple scoring, IFN treats your data as a Flow Network.

  • It balances conflicting goals (Cost vs. Gain) using Normalization.
  • It finds the "Natural Winner" using Probability Flow.
  • It allows you to adjust "Decisiveness" using sensitivity parameters ($\beta, \gamma$).

Why use IFN for Prioritization? (Formal)

Motivation: Decision making is the process of choosing among alternatives based on multiple criteria. The criteria may have different units (e.g., monetary, time, people), and real-world decisions involve conflicting goals (e.g., high quality vs. low cost). We need a mathematically rigorous way to trade these off.

The Problem: Decision-making is hard when objectives conflict (e.g., you want a car that is both "Fast" and "Cheap", but no single option excels at both).

The Solution: IFN converts raw data into a "flow of probability". Unlike methods that just sum up scores, IFN uses non-linear sensitivity to let you control how aggressive the ranking should be.

Comparison with Other Methods

  • IFN — Data-driven, symmetric, probabilistic flow. Best for data-rich scenarios where you want a mathematically robust, tunable ranking without consistency errors.
  • AHP — Subjective pairwise comparison. Use AHP when you lack data and must rely on expert opinion ($A>B$). Weakness: prone to human inconsistency ($A>B>C>A$), which IFN avoids.
  • TOPSIS — Distance to the ideal solution. TOPSIS ranks by "distance to ideal"; IFN ranks by probability flow, which is often more intuitive for resource allocation (priorities sum to 100%).
  • Brown-Gibson — Objective + subjective mix. Use it when you must explicitly separate hard data (cost) from soft data (design). IFN handles both simultaneously if you quantify the subjective factors (e.g., on an ordinal 1-10 scale).
  • IFN Pros: Mathematically symmetric (reversible Markov chain); handles $m \times n$ data natively.
  • IFN Cons: Requires understanding of the sensitivity parameters ($\beta, \gamma$).

📺 Video Lecture


🧠 Core Concepts: Gain/Cost & Parameters

1. Gain vs. Cost (The Direction of Goodness)

To prioritize correctly, we must tell the math which direction is "better".

  • GAIN (Higher is Better)
    Examples: Revenue, Quality, Durability, Customer Satisfaction.
    Math: We scale the max value to 1.0 (Best).
  • COST (Lower is Better)
    Examples: Price, Time, Risk, Error Rate.
    Math: We invert the scale so the min value becomes 1.0 (Best).
    Formula: $1 - \text{NormalizedScore}$

Normalization

Standardizing data to a $[0, 1]$ range.

Gain: $n_i = \frac{g_i - g_{min}}{g_{max} - g_{min}}$
Cost (Inverted): $n_i = 1 - \frac{c_i - c_{min}}{c_{max} - c_{min}}$
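The two formulas above can be sketched in a few lines of Python. This is a minimal illustration; the function name and tie-handling convention are assumptions, not part of the IFN tool's API.

```python
# Minimal sketch of [0, 1] normalization with Gain/Cost direction.
# normalize() is an illustrative helper, not the tool's actual API.

def normalize(values, direction="gain"):
    """Scale raw scores to [0, 1]; invert when lower is better."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:                       # all subjects tie on this attribute
        return [1.0 for _ in values]    # (assumed convention: treat ties as best)
    scaled = [(v - lo) / span for v in values]
    if direction == "cost":             # lower is better: the min becomes 1.0
        scaled = [1.0 - s for s in scaled]
    return scaled

print(normalize([100, 250, 400]))                    # gain: [0.0, 0.5, 1.0]
print(normalize([100, 250, 400], direction="cost"))  # cost: [1.0, 0.5, 0.0]
```

Note that the same raw column yields mirrored results depending on direction, which is exactly the "direction of goodness" idea above.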

2. The Parameters ($\beta, \gamma$)

These knobs control the "personality" of your ranking.

$\beta$ (Attribute Sensitivity):

Determines "Attribute Importance" based on the distribution of data. How much do we care about differences in attribute scores?

  • $\beta > 0$: Niche Finder. Gives more weight to attributes that differentiate the subjects the most; a high value (e.g., $\beta = 2$) focuses on the "stand-out" features.
  • $\beta = 0$: Equality. All attributes are treated as equally important, ignoring the data spread.
  • $\beta < 0$: Inverted priority. Gives more weight to attributes with lower aggregate scores.
$\gamma$ (Subject Sensitivity):

How decisively do we want to pick a winner?

  • $\gamma > 0$: Higher utility = higher rank. $\gamma = 1$ (Standard) makes priority proportional to utility; $\gamma = 2$ (Decisive) creates a larger gap between the winner and the losers; a high value (e.g., 10) approaches a "Winner-Takes-All" outcome.
  • $\gamma = 0$: Uniform. Every subject gets equal priority (all rank 1).
  • $\gamma < 0$ (e.g., $-2$): Inverse ranking. Prefers subjects with lower utility, reversing the list so the "worst" option ranks first. Useful for finding weakest links or "least-priority" items.
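The effect of $\gamma$ can be demonstrated numerically. The sketch below assumes the power-normalization form $\pi_i = u_i^{\gamma} / \sum_j u_j^{\gamma}$, which is consistent with $\gamma = 1$ giving priorities proportional to utility; the exact formula used by the tool may differ.

```python
# Illustrates how γ reshapes priorities, assuming the power form
# π_i = u_i**γ / Σ_j u_j**γ. Utilities must be strictly positive
# (otherwise negative exponents are undefined).

def priorities(utilities, gamma):
    powered = [u ** gamma for u in utilities]
    total = sum(powered)
    return [p / total for p in powered]

u = [0.2, 0.3, 0.5]
print(priorities(u, 0))    # uniform: every subject ties at 1/3
print(priorities(u, 1))    # proportional: matches the utilities
print(priorities(u, 10))   # decisive: the top subject dominates
print(priorities(u, -2))   # inverse: the worst subject ranks first
```

Running this shows the full "personality" range: equal shares at $\gamma = 0$, proportional shares at $\gamma = 1$, near winner-takes-all at $\gamma = 10$, and a reversed list at $\gamma = -2$.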

Relative Power: $\rho_i = \frac{p_i}{\min p_i}$

🧮 Mathematical Steps & Theory

Step-by-Step Algorithm

  1. Normalize: Scale all attributes to $[0,1]$ considering Gain/Cost direction.
  2. Attribute Weight ($\omega_j$): Sum the scores for each attribute and apply Soft-max ($\beta$).
  3. Subject Utility ($u_i$): Calculate the weighted sum for each subject $u_i = \sum \omega_j v_{ij}$.
  4. Priority ($\pi_i$): Apply Soft-max ($\gamma$) to the utilities to get the final probabilities.
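The four steps can be sketched end to end. This is a hedged illustration: it assumes the "Soft-max" in Steps 2 and 4 is the power normalization $x^p / \sum x^p$, and the data matrix and parameter values are made up for demonstration.

```python
# End-to-end sketch of the four steps, assuming a power-normalization
# "soft-max" of the form x**p / sum(x**p) for both β and γ.
# The decision matrix and parameters below are illustrative only.

def power_norm(xs, p):
    powered = [x ** p for x in xs]
    total = sum(powered)
    return [v / total for v in powered]

# Step 1 (already done): normalized matrix n[i][j], 3 subjects x 2 attributes.
n = [[1.0, 0.2],
     [0.5, 0.6],
     [0.0, 1.0]]
beta, gamma = 1.0, 2.0

# Step 2: attribute weights ω_j from the column sums, soft-maxed with β.
col_sums = [sum(row[j] for row in n) for j in range(len(n[0]))]
omega = power_norm(col_sums, beta)

# Step 3: subject utilities u_i = Σ_j ω_j v_ij (weighted sums).
u = [sum(w * v for w, v in zip(omega, row)) for row in n]

# Step 4: priorities π_i via soft-max with γ; relative power ρ_i = π_i / min π.
pi = power_norm(u, gamma)
rho = [p / min(pi) for p in pi]

print("weights:", omega)
print("priority:", pi, "sum =", sum(pi))
print("relative power:", rho)
```

The priorities always sum to 1, which is what makes the donut-chart ("market share") reading of the result possible.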

IFN Core Theory

1. Normalization: Scale data to $[0,1]$.
2. 1D Stochastic ($S$): Creating a matrix by repeating probability rows ensures the network is "balanced".
3. Reversible Markov Chain ($F$): The Ideal Flow matrix becomes symmetric ($f_{ij} = f_{ji}$).
4. Priority ($\pi$): The sum of row $i$ (or column $i$) of $F$ equals subject $i$'s priority.

Reversible Markov Chain Proof

IFN creates a 1D Stochastic Matrix $S$ by repeating probability rows. This mathematical structure guarantees that the resulting Ideal Flow Matrix $F$ is symmetric ($f_{ij} = f_{ji}$).

In Physics and Probability theory, a symmetric flow matrix implies the system is a Reversible Markov Chain satisfying the Detailed Balance condition: $\pi_i s_{ij} = \pi_j s_{ji}$.
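The claim is easy to verify numerically. Below is a small check, assuming the "1D stochastic" construction means every row of $S$ equals the priority vector $\pi$ and the flow is $f_{ij} = \pi_i s_{ij}$; the values of $\pi$ are illustrative.

```python
# Numerical check of the reversibility claim: if every row of S equals the
# priority vector π (the "1D stochastic" construction), then the flow
# f_ij = π_i * s_ij is symmetric and detailed balance π_i s_ij = π_j s_ji holds.

pi = [0.2, 0.3, 0.5]                   # illustrative priority vector
S = [pi[:] for _ in pi]                # repeat the probability row

F = [[pi[i] * S[i][j] for j in range(3)] for i in range(3)]

for i in range(3):
    for j in range(3):
        # symmetry: f_ij = f_ji (here f_ij = π_i * π_j)
        assert abs(F[i][j] - F[j][i]) < 1e-12
        # detailed balance: π_i s_ij = π_j s_ji
        assert abs(pi[i] * S[i][j] - pi[j] * S[j][i]) < 1e-12

# Each row sum of F recovers that subject's priority (≈ π).
print([sum(row) for row in F])
```

Since $s_{ij} = \pi_j$ for every row, $f_{ij} = \pi_i \pi_j = \pi_j \pi_i = f_{ji}$, so symmetry and detailed balance hold by construction, and each row of $F$ sums to $\pi_i \sum_j \pi_j = \pi_i$.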

📊 How to Interpret Results & Charts

1. Trade-off Charts (Radar & Scatter)

Visualizes Normalized Scores (0=Worst, 1=Best).

A. Radar Chart ($n > 2$ Attributes)
  • Outer Edge (1.0): The Best possible score (Highest Gain or Lowest Cost).
  • Center (0.0): The Worst possible score (Lowest Gain or Highest Cost).
  • The Shape: Large area = Good all-rounder (Generalist - Good at everything, but maybe not #1 in anything). Spiky = Specialist (Good at some things, bad at others).
  • Dominance: If A's shape is inside B's, B is strictly better.
  • Crossing Lines: Visualizes the trade-off (e.g., Subject A is better at Cost, but Subject B is better at Quality).
B. Scatter Plot ($n = 2$ Attributes)
  • Top-Right Corner (1,1): The "Ideal" zone (Best at both).
  • Bottom-Left (0,0): The worst zone.
  • Diagonal: Points along the diagonal ($y=x$) are balanced. Points far from the diagonal show a strong trade-off (e.g., High Cost, Low Quality).

2. Relative Power (Bar Chart)

Shows how much "stronger" a subject is compared to the weakest option.

  • Baseline (1.0x): The lowest-ranked subject.
  • Multiplier: If Subject A is 1.5x, it is 50% more preferred than the baseline.
  • Use Case: Helps decide if the winner is significantly better or just slightly better.

3. Probability (Donut Chart)

The "Market Share" of preference.

  • Sum = 100%: Total utility is distributed among subjects.
  • Interpretation: "If we ran this decision 100 times, Subject A is the best choice X times."

IFN Prioritization Laboratory

The interactive laboratory lets you load example scenarios, configure the number of subjects ($m$, default 3) and attributes ($n$, default 2), set the sensitivity parameters $\beta$ and $\gamma$, and toggle each column between Gain and Cost by clicking its header.