$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/water/water_grid_repeat.prism --prop $BENCH_HOME/models/water/water.props rbrmax2 -const B1=1790,B2=150 --timemem --statistics --revised --reward-aware --belief-exploration unfold --size-threshold 256
Storm-POMDP: sequential approach, cost-aware, with cutoffs and size threshold 2^8 (= 256).
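For readers reproducing this run, a minimal annotated sketch of the invocation above. The explanations of --revised, --reward-aware, --belief-exploration, and --size-threshold are inferred from this log and the note above, not taken from Storm's documentation, and are marked as assumptions:

# Annotated flags of the storm-pomdp command above (assumptions marked).
# --prism <file>                 PRISM model describing the POMDP
# --prop <file> rbrmax2          property file; 'rbrmax2' names the property to check
# -const B1=1790,B2=150          instantiate the budget constants used in the property
# --timemem                      print CPU/wallclock time and peak memory at the end
# --statistics                   print the approximation statistics shown below
# --revised                      assumed: selects the revised sequential approach (see note)
# --reward-aware                 make the observation function reward-aware (cf. log below)
# --belief-exploration unfold    assumed: unfold the belief MDP during exploration
# --size-threshold 256           assumed: cap (2^8) on the explored belief-MDP size before cutoffs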
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 15:30:00 2025
Command line arguments: --prism $BENCH_HOME/models/water/water_grid_repeat.prism --prop $BENCH_HOME/models/water/water.props rbrmax2 -const 'B1=1790,B2=150' --timemem --statistics --revised --reward-aware --belief-exploration unfold --size-threshold 256
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.002s.
Time for model construction: 0.010s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 34
Transitions: 132
Choices: 130
Observations: 5
Reward Models: homeReached, energy
State Labels: 2 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 5 labels
* flip -> 2 item(s)
* north -> 32 item(s)
* east -> 32 item(s)
* west -> 32 item(s)
* south -> 32 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=1790 , rew{"homeReached"}>=150 }true]'
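For orientation, one way to read this multi-reward-bounded property, assuming the usual semantics of reward-bounded until and substituting the budgets B1 = 1790 and B2 = 150 from the command line (a sketch, not Storm's formal definition):

\[
  \Pr^{\max}\Big( \exists n \ge 0 :\; \sum_{i<n} \mathit{rew}_{\text{energy}}(s_i, a_i) \le 1790 \;\wedge\; \sum_{i<n} \mathit{rew}_{\text{homeReached}}(s_i, a_i) \ge 150 \Big)
\]

that is, the maximal probability of accumulating at least 150 homeReached reward while spending at most 1790 energy.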
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 67
Transitions: 258
Choices: 253
Observations: 23
Reward Models: homeReached, energy
State Labels: 2 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 5 labels
* west -> 62 item(s)
* south -> 62 item(s)
* north -> 62 item(s)
* flip -> 5 item(s)
* east -> 62 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [true U^{rew{"energy"}<=1790 , rew{"homeReached"}>=150 }true]
Time for pre-processing: 0.000s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
--------------------------------------------------------------
Model type: MDP (sparse)
States: 84
Transitions: 331
Choices: 315
Reward Models: homeReached, energy
State Labels: 3 labels
* target -> 83 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=1790 , rew{"homeReached"}>=150 }"target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 165 states. 1.964285714 times more states than the original belief MDP.
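As a quick sanity check, the reported factor matches the state counts above: the belief MDP has 84 states, the transformed model 165, and 165 / 84 = 1.964285714..., in agreement with the figure printed here.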
Merging of sink states resulted in a model with 164 states.
---------------------------------
Statistics:
---------------------------------
#checked epochs: 259267.
overall Time: 4.076s.
Epoch Model building Time: 1.996s.
Epoch Model checking Time: 1.088s.
---------------------------------
Time for exploring beliefs: 0.000s.
Time for building the belief MDP: 0.000s.
Time for analyzing the belief MDP: 4.259s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 67
Transitions: 258
Choices: 253
Observations: 23
Reward Models: homeReached, energy
State Labels: 2 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 5 labels
* west -> 62 item(s)
* south -> 62 item(s)
* north -> 62 item(s)
* flip -> 5 item(s)
* east -> 62 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 14
# Pre-computations detected that the belief MDP is finite.
# Total check time: 4.259s
##########################################
Result: ≥ 0.180755753
Time for POMDP analysis: 4.259s.
Performance statistics:
* peak memory usage: 51MB
* CPU time: 4.265s
* wallclock time: 4.276s