src.asqi.metric_expression
Safe metric expression evaluator for score card indicators.
Supports arithmetic operations and aggregation functions while maintaining security by using AST parsing instead of eval().
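As a minimal sketch of the technique (using only the standard-library ast module, not this module's actual code), parsing yields a tree that can be inspected node by node before anything runs, whereas eval() would execute the string directly:

    import ast

    # Parse the formula into a tree without executing it; mode="eval"
    # restricts the input to a single expression (no statements, no imports).
    tree = ast.parse("0.7 * accuracy + 0.3 * relevance", mode="eval")

    # Every node can be checked against a whitelist before evaluation.
    for node in ast.walk(tree):
        print(type(node).__name__)  # Expression, BinOp, Name, Constant, ...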
Attributes
logger
Classes
MetricExpressionEvaluator: Safe evaluator for metric expressions in score cards.
Module Contents
- src.asqi.metric_expression.logger
- class src.asqi.metric_expression.MetricExpressionEvaluator
Safe evaluator for metric expressions in score cards.
Supports:
- Arithmetic operators: +, -, *, /
- Aggregation functions: min(), max(), avg()
- Numeric literals and parentheses
- Complex formulas: '0.7 * accuracy + 0.3 * relevance'
Does NOT support:
- Code execution (no eval/exec)
- Arbitrary function calls
- Variable assignment
- Imports or other Python statements
- Examples:
>>> evaluator = MetricExpressionEvaluator()
>>> # Simple variable
>>> evaluator.evaluate_expression("accuracy", {"accuracy": 0.85})
0.85
>>> # Weighted average
>>> evaluator.evaluate_expression("0.7 * a + 0.3 * b", {"a": 0.8, "b": 0.9})
0.83
>>> # Min function
>>> evaluator.evaluate_expression("min(x, y, z)", {"x": 0.9, "y": 0.7, "z": 0.8})
0.7
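Unsafe input is rejected rather than executed. The exact error message below is illustrative, but parse failures and disallowed operations raise MetricExpressionError (see parse_expression):

>>> evaluator.evaluate_expression("__import__('os').system('ls')", {})
Traceback (most recent call last):
    ...
MetricExpressionError: ...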
- ALLOWED_OPS
- ALLOWED_FUNCTIONS
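The documentation does not spell out the contents of these whitelists; a plausible shape, mapping AST operator node types and function names to callables (hypothetical values matching the documented feature set, for illustration only):

    import ast
    import operator

    # Hypothetical whitelists covering +, -, *, / and min(), max(), avg().
    ALLOWED_OPS = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
    }
    ALLOWED_FUNCTIONS = {
        "min": min,
        "max": max,
        "avg": lambda *values: sum(values) / len(values),
    }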
- parse_expression(expression: str) → ast.Expression
Parse an expression string into an AST.
- Args:
expression: The expression string to parse
- Returns:
Parsed AST expression
- Raises:
MetricExpressionError: If parsing fails or contains unsafe operations
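A usage sketch based on the signature above (the ast import belongs to the caller):

>>> import ast
>>> tree = evaluator.parse_expression("0.7 * accuracy + 0.3 * relevance")
>>> isinstance(tree, ast.Expression)
True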
- evaluate_expression(expression: str, metric_values: Dict[str, int | float]) → int | float
Evaluate a metric expression with provided metric values.
- Args:
expression: The expression string to evaluate
metric_values: Dictionary mapping metric paths to their numeric values
- Returns:
The computed numeric result
- Raises:
MetricExpressionError: If evaluation fails
- Example:
>>> evaluator.evaluate_expression(
...     "0.5 * accuracy + 0.5 * relevance",
...     {"accuracy": 0.8, "relevance": 0.9}
... )
0.85
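avg() is listed as supported but not demonstrated above; assuming the same variadic call style shown for min(), a sketch:

>>> evaluator.evaluate_expression("avg(x, y)", {"x": 0.5, "y": 1.0})
0.75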