storm.unfd04

Benchmark
id: main_walk_rbrmax1_N120-B180 (POMDP)
Invocation (unfd04)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const N=120,B1=80 --timemem --statistics --revised --unfold-reward-bound --belief-exploration discretize --resolution 4 --triangulationmode static
Storm-pomdp: unfolds cost bounds (not cost-aware), using discretized belief exploration with resolution 4.
Execution
Walltime: 0.5127027034759521s
Return code: 0
Note(s): Storm-pomdp: unfolds cost bounds (not cost-aware), using discretized belief exploration with resolution 4.
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 15:04:02 2025
Command line arguments: --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const 'N=120,B1=80' --timemem --statistics --revised --unfold-reward-bound --belief-exploration discretize --resolution 4 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.001s.

Time for model construction: 0.008s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	244
Transitions: 	968
Choices: 	607
Observations: 	124
Reward Models:  obsCost
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * goal -> 1 item(s)
Choice Labels: 	3 labels
   * observe -> 121 item(s)
   * move -> 242 item(s)
   * stop -> 244 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true Urew{"obsCost"}<=80 "goal"]'
Perform explicit unfolding of reward bounds.
bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	19887
Transitions: 	79014
Choices: 	49532
Observations: 	124
Reward Models:  none
State Labels: 	4 labels
   * dim0_active -> 19643 item(s)
   * goal -> 82 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * observe -> 9922 item(s)
   * stop -> 19887 item(s)
   * move -> 19723 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [(true & "dim0_active") U ("goal" & "dim0_active")]
Time for pre-processing: 0.014s.
Exploring the belief MDP... 
Exploring the belief space...
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	58720
Transitions: 	349826
Choices: 	166193
Reward Models:  none
State Labels: 	3 labels
   * target -> 1 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Time for exploring beliefs: 0.127s.
Time for building the belief MDP: 0.010s.
Time for analyzing the belief MDP: 0.187s.
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	19887
Transitions: 	78653
Choices: 	49532
Observations: 	125
Reward Models:  none
State Labels: 	4 labels
   * dim0_active -> 19643 item(s)
   * goal -> 82 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * observe -> 9922 item(s)
   * stop -> 19887 item(s)
   * move -> 19723 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 9922
# Total check time: 0.453s
##########################################

Result: ≤ 0.9797989585
Time for POMDP analysis: 0.454s.

Performance statistics:
  * peak memory usage: 93MB
  * CPU time: 0.471s
  * wallclock time: 0.483s


############################## Notes ##############################
Storm-pomdp. Unfolds cost bounds, not cost aware, with discretization and resolution 4
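
For intuition on the "Perform explicit unfolding of reward bounds." step reported in the log, here is a minimal sketch in Python, assuming a generic explicit model representation (the dictionaries and helper names below are illustrative only, not Storm's API, and the reward accounting is simplified). States are paired with the reward spent so far, capped just above the bound, so the cost-bounded property Pmax=? [true Urew{"obsCost"}<=80 "goal"] becomes plain reachability of states that are both "goal" and "dim0_active", as shown in the transformed formula above.

from collections import deque

def unfold_reward_bound(transitions, rewards, initial, bound):
    # transitions: dict state -> dict action -> list of (probability, successor)
    # rewards:     dict (state, action) -> non-negative integer reward
    # Returns a transition structure over pairs (state, spent), where 'spent'
    # is the reward accumulated so far. It is capped at bound + 1 so that all
    # runs that have already exceeded the bound collapse into one level and
    # the unfolding stays finite.
    unfolded = {}
    queue = deque([(initial, 0)])
    while queue:
        state, spent = queue.popleft()
        if (state, spent) in unfolded:
            continue
        choices = {}
        for action, branches in transitions.get(state, {}).items():
            spent_next = min(spent + rewards.get((state, action), 0), bound + 1)
            choices[action] = [(p, (succ, spent_next)) for p, succ in branches]
            for _, successor in choices[action]:
                if successor not in unfolded:
                    queue.append(successor)
        unfolded[(state, spent)] = choices
    return unfolded

def dim0_active(unfolded_state, bound):
    # Mirrors the 'dim0_active' label in the log: the accumulated reward of
    # the single bound dimension has not exceeded the bound yet.
    _, spent = unfolded_state
    return spent <= bound

The unfolded model has at most |S| * (bound + 2) states; for this instance that is 244 * 82 = 20008, and the 19887 states reported in the log stay below that because unreachable (state, spent) pairs are never constructed. The discretized belief exploration with resolution 4 is then applied to this unfolded POMDP, which is why the belief MDP (58720 states) is built from the 19887-state model rather than from the original 244-state one.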