storm.unfd03

Benchmark
id: main_clean_rbrmax2_N12-B1120-B211-__lvl11-__lvl20 (POMDP)
Invocation (unfd03)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const N=12,B1=120,B2=11 --timemem --statistics --revised --unfold-reward-bound --belief-exploration discretize --resolution 3 --triangulationmode static
Storm-POMDP: unfolds the cost bounds into the state space (not cost-aware), then explores the belief MDP via discretization with resolution 3.
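For orientation: the checked property (echoed verbatim in the log below) is a multi-cost-bounded reachability query. With the instantiated constants it reads, in LaTeX notation,

  \mathrm{P}_{\max=?}\big[\, \mathit{true} \;\mathsf{U}^{\mathit{rew}(\mathit{energy}) \le 120,\; \mathit{rew}(\mathit{clean}) > 11}\; \text{"goal"} \,\big]

i.e., the maximal probability of reaching a goal state along paths that accumulate at most B1 = 120 energy cost and strictly more than B2 = 11 clean reward.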
Execution
Walltime: 3.3796441555023193 s
Return code: 0
Note(s): Storm-POMDP: unfolds the cost bounds into the state space (not cost-aware), then explores the belief MDP via discretization with resolution 3.
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 09:39:27 2025
Command line arguments: --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const 'N=12,B1=120,B2=11' --timemem --statistics --revised --unfold-reward-bound --belief-exploration discretize --resolution 3 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.003s.

Time for model construction: 0.017s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	73
Transitions: 	146
Choices: 	98
Observations: 	2
Reward Models:  energy, clean
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * goal -> 1 item(s)
Choice Labels: 	3 labels
   * clean -> 25 item(s)
   * move -> 25 item(s)
   * consume -> 48 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=120 , rew{"clean"}>11 }"goal"]'
Perform explicit unfolding of reward bounds.
bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	26246
Transitions: 	52414
Choices: 	35784
Observations: 	2
Reward Models:  none
State Labels: 	5 labels
   * dim0_active -> 26173 item(s)
   * goal -> 1262 item(s)
   * dim1_active -> 158 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * move -> 9538 item(s)
   * consume -> 16708 item(s)
   * clean -> 9538 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [(true & "dim0_active") U (("goal" & "dim0_active") & "dim1_active")]
Time for pre-processing: 0.015s.
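The "explicit unfolding of reward bounds" step compiles the two cost dimensions into the state space, which is why the transformed formula above is a plain (unbounded) reachability query over the labels dim0_active (energy budget not yet exceeded) and dim1_active (clean threshold already passed). The following is a minimal illustrative sketch of this epoch-unfolding idea, assuming a toy dict-based MDP representation; it is not Storm's implementation (Storm keeps bound-violating states and labels them, whereas this sketch simply prunes them):

from collections import deque

# Toy MDP encoding: transitions[s][a] = [(prob, succ), ...] and
# rewards[(s, a)] = (energy_cost, clean_reward).
def unfold(transitions, rewards, init, energy_bound, clean_bound):
    """Unfold two reward bounds into states (s, energy_left, clean_sofar)."""
    start = (init, energy_bound, 0)
    unfolded, todo = {}, deque([start])
    while todo:
        state = todo.popleft()
        if state in unfolded:
            continue
        s, energy_left, clean_sofar = state
        unfolded[state] = {}
        for a, succs in transitions[s].items():
            e_cost, c_rew = rewards[(s, a)]
            if energy_left - e_cost < 0:
                # Energy bound broken; Storm keeps and labels such states
                # (dim0 inactive), this sketch just prunes them.
                continue
            # Cap the collected clean reward so the unfolding stays finite;
            # dim1_active corresponds to clean_sofar > clean_bound.
            nxt_clean = min(clean_sofar + c_rew, clean_bound + 1)
            unfolded[state][a] = [
                (p, (s2, energy_left - e_cost, nxt_clean)) for p, s2 in succs
            ]
            todo.extend(t for _, t in unfolded[state][a])
    return unfolded, start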
Exploring the belief MDP... 
Exploring the belief space...
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	942541
Transitions: 	2230320
Choices: 	1328078
Reward Models:  none
State Labels: 	3 labels
   * target -> 1 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Time for exploring beliefs: 2.014s.
Time for building the belief MDP: 0.112s.
Time for analyzing the belief MDP: 0.109s.
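During belief exploration, each reached belief is clipped onto a fixed grid over the probability simplex (--belief-exploration discretize, --resolution 3, --triangulationmode static): the belief is written as a convex combination of grid beliefs whose entries are multiples of 1/3. Below is a minimal sketch of Freudenthal-style static triangulation, assuming beliefs are plain probability vectors summing to 1; this is not Storm's code:

import math

def triangulate(belief, resolution):
    """Express `belief` as a convex combination of grid beliefs whose
    entries are multiples of 1/resolution (Freudenthal triangulation)."""
    n = len(belief)
    # Cumulative-sum coordinates: x[i] = resolution * sum of belief[i:].
    x = [resolution * sum(belief[i:]) for i in range(n)]
    base = [math.floor(xi) for xi in x]
    frac = [xi - bi for xi, bi in zip(x, base)]
    order = sorted(range(n), key=lambda i: -frac[i])  # fractional parts, desc.
    # Walk the simplex: each vertex adds one unit along the next coordinate.
    verts = [base[:]]
    for i in order:
        nxt = verts[-1][:]
        nxt[i] += 1
        verts.append(nxt)
    fr = [frac[i] for i in order] + [0.0]
    weights = [1.0 - fr[0]] + [fr[k] - fr[k + 1] for k in range(n)]
    # Map integer vertices back to beliefs via successive differences.
    def to_belief(v):
        return [(v[i] - (v[i + 1] if i + 1 < n else 0)) / resolution
                for i in range(n)]
    return [(w, to_belief(v)) for w, v in zip(weights, verts) if w > 1e-9]

# e.g. triangulate([0.3, 0.7], 3) decomposes the belief as roughly
# 0.9 * [1/3, 2/3] + 0.1 * [0, 1].

Since the resulting grid MDP over-approximates the values achievable under maximization, the reported "Result: ≤ 0.999999993" below is an upper bound on the true Pmax value.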
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	26246
Transitions: 	52366
Choices: 	35784
Observations: 	3
Reward Models:  none
State Labels: 	5 labels
   * dim0_active -> 26173 item(s)
   * goal -> 1262 item(s)
   * dim1_active -> 158 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * move -> 9538 item(s)
   * consume -> 16708 item(s)
   * clean -> 9538 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 16708
# Pre-computations detected that the belief MDP is finite.
# Total check time: 2.725s
##########################################

Result: ≤ 0.999999993
Time for POMDP analysis: 2.727s.

Performance statistics:
  * peak memory usage: 471MB
  * CPU time: 2.668s
  * wallclock time: 2.770s


############################## Notes ##############################
Storm-POMDP: unfolds the cost bounds into the state space (not cost-aware), then explores the belief MDP via discretization with resolution 3.