$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/incline/incline.prism --prop $BENCH_HOME/models/incline/incline.props rbrmax2 -const B1=75,B2=20 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration unfold --size-threshold 16384
Storm-pomdp run: unfolds the cost bounds, cost-aware, with cutoffs and a belief-MDP size threshold of 2^14 (16384).
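The property named rbrmax2 in incline.props is presumably the parametric form of the formula analyzed below, i.e. Pmax=? [true U^{rew{"energy"}<=B1 , rew{"steps"}<=B2} "goal"], with the bounds B1=75 and B2=20 supplied via -const. The exact syntax used in the props file is an assumption; the instantiated formula is taken from Storm's output below.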
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 14:16:34 2025
Command line arguments: --prism $BENCH_HOME/models/incline/incline.prism --prop $BENCH_HOME/models/incline/incline.props rbrmax2 -const 'B1=75,B2=20' --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration unfold --size-threshold 16384
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.002s.
Time for model construction: 0.007s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 25
Transitions: 110
Choices: 80
Observations: 9
Reward Models: steps, energy
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* goal -> 1 item(s)
Choice Labels: 4 labels
* north -> 20 item(s)
* east -> 20 item(s)
* south -> 20 item(s)
* west -> 20 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=75 , rew{"steps"}<=20 }"goal"]'
Perform explicit unfolding of reward bounds.
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 5092
Transitions: 22358
Choices: 16424
Observations: 29
Reward Models: energy, steps
State Labels: 5 labels
* goal -> 129 item(s)
* dim0_active -> 5025 item(s)
* dim1_active -> 5025 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 4 labels
* west -> 3966 item(s)
* south -> 3966 item(s)
* north -> 4246 item(s)
* east -> 4246 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [((true & "dim0_active") & "dim1_active") U (("goal" & "dim0_active") & "dim1_active")]
Time for pre-processing: 0.004s.
Exploring the belief MDP...
Exploring the belief space...
Exploration stopped before all beliefs were explored. 16386 beliefs discovered. 9580 beliefs explored.
Constructing the belief MDP...
--------------------------------------------------------------
Model type: MDP (sparse)
States: 16372
Transitions: 64546
Choices: 40239
Reward Models: none
State Labels: 3 labels
* target -> 1 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
Time for exploring beliefs: 0.018s.
Time for building the belief MDP: 0.005s.
Time for analyzing the belief MDP: 0.006s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 5092
Transitions: 22276
Choices: 16424
Observations: 31
Reward Models: energy, steps
State Labels: 5 labels
* goal -> 129 item(s)
* dim0_active -> 5025 item(s)
* dim1_active -> 5025 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 4 labels
* west -> 3966 item(s)
* south -> 3966 item(s)
* north -> 4246 item(s)
* east -> 4246 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 740
# Pre-computations detected that the belief MDP is finite.
# Total check time: 0.068s
##########################################
Result: ≥ 0.953339849
Time for POMDP analysis: 0.068s.
Performance statistics:
* peak memory usage: 52MB
* CPU time: 0.080s
* wallclock time: 0.087s
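The "≥" in the result marks a lower bound on Pmax: exploration stopped at the size threshold (16386 beliefs discovered, 9580 explored) and the remaining beliefs were handled via cutoffs. As a sketch, re-running with a larger threshold should typically tighten the bound, and since the pre-computation reports that the belief MDP is finite, a large enough threshold would presumably make it exact. The value 65536 below is illustrative; all other flags are unchanged from the invocation above.

$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/incline/incline.prism --prop $BENCH_HOME/models/incline/incline.props rbrmax2 -const B1=75,B2=20 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration unfold --size-threshold 65536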