$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/obstcl/obstcl.prism --prop $BENCH_HOME/models/obstcl/obstcl.props rbrmax2 -const B1=25,B2=7 --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 25 --triangulationmode static
Storm-pomdp: sequential approach, reward-aware, with belief discretization at resolution 25
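For readability, the same invocation with one flag group per line. The flag names are verbatim from the command above; the comment glosses are our reading of the options, not official documentation:

  # --revised: use the revised implementation of the sequential approach
  # --reward-aware: extend the observation function to be reward-aware (see the log line further down)
  # --belief-exploration discretize --resolution 25: explore a discretized belief space on a grid of resolution 25
  # --triangulationmode static: map off-grid beliefs to grid points via a fixed (static) triangulation
  $BENCH_HOME/bin/storm-pomdp \
    --prism $BENCH_HOME/models/obstcl/obstcl.prism \
    --prop $BENCH_HOME/models/obstcl/obstcl.props rbrmax2 \
    -const B1=25,B2=7 \
    --timemem --statistics \
    --revised --reward-aware \
    --belief-exploration discretize --resolution 25 \
    --triangulationmode static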
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 14:16:44 2025
Command line arguments: --prism $BENCH_HOME/models/obstcl/obstcl.prism --prop $BENCH_HOME/models/obstcl/obstcl.props rbrmax2 -const 'B1=25,B2=7' --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 25 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.002s.
Time for model construction: 0.009s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 25
Transitions: 115
Choices: 80
Observations: 10
Reward Models: steps, energy
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* goal -> 1 item(s)
Choice Labels: 4 labels
* north -> 20 item(s)
* east -> 20 item(s)
* south -> 20 item(s)
* west -> 20 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=25, rew{"steps"}<=7} "goal"]'
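The bounds 25 and 7 in the formula are the constants B1 and B2 instantiated on the command line. A hypothetical reconstruction of the rbrmax2 entry in obstcl.props (the actual file may well differ):

  // rbrmax2: maximize the probability of reaching "goal" within both budgets;
  // B1 (energy) and B2 (steps) are supplied via -const on the command line
  "rbrmax2": Pmax=? [ true U^{rew{"energy"}<=B1, rew{"steps"}<=B2} "goal" ];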
Extending observation function to become reward-aware.
Bounded reachability processing done. POMDP information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 43
Transitions: 202
Choices: 139
Observations: 23
Reward Models: steps, energy
State Labels: 3 labels
* goal -> 2 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 4 labels
* west -> 34 item(s)
* south -> 35 item(s)
* north -> 35 item(s)
* east -> 35 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [true U^{rew{"energy"}<=25, rew{"steps"}<=7} "goal"]
Time for pre-processing: 0.000s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
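What --belief-exploration discretize with --resolution 25 and static triangulation amounts to, in our summary of the standard grid-based belief approximation (an assumption about the method, not taken from the tool output): each belief b reached during exploration is replaced by a convex combination of grid beliefs,

  b = \sum_i \lambda_i b_i,  b_i \in G_\eta = \{ b' \mid b'(s) = k_s/\eta,\ k_s \in \mathbb{N},\ \textstyle\sum_s b'(s) = 1 \},  \eta = 25,

where the weights \lambda_i \ge 0 with \sum_i \lambda_i = 1 come from a fixed (static) triangulation of the belief simplex. Successors in the belief MDP then lead to grid beliefs only, which keeps the explored state space finite.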
--------------------------------------------------------------
Model type: MDP (sparse)
States: 67718
Transitions: 2269789
Choices: 268744
Reward Models: steps, energy
State Labels: 3 labels
* target -> 2 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=25, rew{"steps"}<=7} "target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 135434 states, i.e., 1.999970466 times as many states as the original belief MDP.
Merging of sink states resulted in a model with 135433 states.
Epoch model for epoch <_, _> is cyclic.
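For context: the sequential approach checks the reward-bounded formula epoch by epoch, where an epoch is a pair of remaining budgets (our summary of the standard construction, not taken from the tool output):

  e = (e_1, e_2),  e_i \in \{0, \dots, B_i\} \cup \{\bot\},
  succ((e_1, e_2), (r_1, r_2)) = (e_1 \ominus r_1, e_2 \ominus r_2),  where e \ominus r = e - r if e - r \ge 0, else \bot.

The epoch <_, _> presumably denotes both budgets exhausted; that its epoch model is cyclic means it contains cycles (arising from transitions with zero reward in both dimensions), so it must be solved iteratively rather than in a single backward pass.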
---------------------------------
Statistics:
---------------------------------
#checked epochs: 83.
Overall time: 1.916s.
Epoch model building time: 1.254s.
Epoch model checking time: 0.661s.
---------------------------------
Time for exploring beliefs: 1.202s.
Time for building the belief MDP: 0.259s.
Time for analyzing the belief MDP: 3.344s.
###### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 43
Transitions: 202
Choices: 139
Observations: 23
Reward Models: steps, energy
State Labels: 3 labels
* goal -> 2 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 4 labels
* west -> 34 item(s)
* south -> 35 item(s)
* north -> 35 item(s)
* east -> 35 item(s)
--------------------------------------------------------------
# Max. number of states with the same observation: 9
# Total check time: 4.876s
##########################################
Result: ≤ 0.8701171875
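The "≤" marks the result as an upper bound on Pmax. That is what the grid-based discretization yields, since POMDP value functions are convex in the belief (our gloss, consistent with the triangulation sketched above):

  V(\sum_i \lambda_i b_i) \le \sum_i \lambda_i V(b_i),

so the belief MDP built from grid beliefs over-approximates the true maximal reachability probability.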
Time for POMDP analysis: 4.915s.
Performance statistics:
* peak memory usage: 745MB
* CPU time: 4.716s
* wallclock time: 4.929s
############################## Notes ##############################
Storm-pomdp: sequential approach, reward-aware, with belief discretization at resolution 25