$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const N=40,B1=80 --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 36 --triangulationmode static
Storm-pomdp: sequential approach, cost-aware, with discretization at resolution 36
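As a quick sanity check before committing to the full resolution-36 grid, the same invocation could be rerun with a coarser resolution (the value 6 below is illustrative and not part of this experiment; all other flags are unchanged from the command above):

$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/walk/walk.prism \
    --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const N=40,B1=80 \
    --timemem --statistics --revised --reward-aware \
    --belief-exploration discretize --resolution 6 --triangulationmode static

A coarser grid explores fewer beliefs, so such a run finishes faster but yields a weaker (larger) upper bound.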
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 15:03:44 2025
Command line arguments: --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const 'N=40,B1=80' --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 36 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.002s.
Time for model construction: 0.012s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 84
Transitions: 328
Choices: 207
Observations: 44
Reward Models: obsCost
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* goal -> 1 item(s)
Choice Labels: 3 labels
* observe -> 41 item(s)
* move -> 82 item(s)
* stop -> 84 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true Urew{"obsCost"}<=80 "goal"]'
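Read informally (a sketch of the standard semantics of a reward-bounded until, not tool output): the property asks for the maximal probability, over observation-based strategies, of eventually reaching a goal state while the reward accumulated in the obsCost structure stays within the bound:

\[
  \mathrm{Pmax} \;=\; \sup_{\sigma \in \Sigma_{\mathrm{obs}}} \; \Pr\nolimits^{\sigma}\bigl[\, \mathsf{true} \;\mathsf{U}^{\mathit{obsCost} \le 80}\; \mathrm{goal} \,\bigr]
\]

where \(\Sigma_{\mathrm{obs}}\) denotes the observation-based strategies of the POMDP.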
Extending the observation function to become reward-aware.
Bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 126
Transitions: 537
Choices: 333
Observations: 47
Reward Models: obsCost
State Labels: 3 labels
* goal -> 1 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* observe -> 83 item(s)
* stop -> 126 item(s)
* move -> 124 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [true Urew{"obsCost"}<=80 "goal"]
Time for pre-processing: 0.000s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
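For orientation, a sketch of the grid-based over-approximation behind --belief-exploration discretize (the usual Lovejoy-style construction; the details of Storm's implementation may differ): with resolution \(\eta = 36\), the grid consists of the beliefs whose entries are multiples of \(1/36\),

\[
  G_{36} \;=\; \bigl\{\, b \;\big|\; b(s) = k_s/36,\ k_s \in \mathbb{N},\ \textstyle\sum_s k_s = 36 \,\bigr\},
\]

and each off-grid belief reached during exploration is replaced by a convex combination \(b = \sum_i \lambda_i g_i\) of grid points \(g_i \in G_{36}\) chosen by the static triangulation. This keeps the belief MDP finite and makes the computed value an over-approximation, which is why the result below is reported as an upper bound ("≤").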
--------------------------------------------------------------
Model type: MDP (sparse)
States: 747266
Transitions: 14015917
Choices: 2241751
Reward Models: obsCost
State Labels: 3 labels
* target -> 1 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
Analyzing property 'Pmax=? [true Urew{"obsCost"}<=80 "target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 1120918 states, i.e. 1.500025426 times as many states as the original belief MDP.
Merging of sink states resulted in a model with 1120912 states.
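The reported blow-up factor is simply the ratio of the two state counts; a one-line check (assuming bc is available):

$ echo "scale=9; 1120918 / 747266" | bc
1.500025426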
Epoch model for epoch <_> is cyclic.
Epoch model for epoch <0> is cyclic.
---------------------------------
Statistics:
---------------------------------
#checked epochs: 82.
Overall time: 220.068s.
Epoch model building time: 5.847s.
Epoch model checking time: 214.220s.
---------------------------------
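The two epoch-model timings account for essentially all of the overall time; the remaining 0.001s is presumably bookkeeping. A quick check:

$ echo "5.847 + 214.220" | bc
220.067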
Time for exploring beliefs: 11.304s.
Time for building the belief MDP: 1.197s.
Time for analyzing the belief MDP: 225.093s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 126
Transitions: 537
Choices: 333
Observations: 47
Reward Models: obsCost
State Labels: 3 labels
* goal -> 1 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* observe -> 83 item(s)
* stop -> 126 item(s)
* move -> 124 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 41
# Total check time: 238.322s
##########################################
Result: ≤ 0.9323304599
Time for POMDP analysis: 238.735s.
Performance statistics:
* peak memory usage: 4072MB
* CPU time: 237.270s
* wallclock time: 238.757s
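When aggregating many such runs, the headline figures can be scraped from the saved output; a minimal sketch, assuming this run was redirected to a log file (the filename is hypothetical):

$ grep -E 'Result:|Total check time|peak memory usage' walk_N40_B80_res36.log
# Total check time: 238.322s
Result: ≤ 0.9323304599
* peak memory usage: 4072MB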
############################## Notes ##############################
Storm-pomdp: sequential approach, cost-aware, with discretization at resolution 36