$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/resrc/resrc.prism --prop $BENCH_HOME/models/resrc/resrc.props rbrmax3 -const B1=15,B2=15,B3=180 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration discretize --resolution 1 --triangulationmode static
Storm-POMDP: unfolds cost bounds, cost-aware, with belief discretization at resolution 1
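The invocation checks the named property 'rbrmax3' from resrc.props, with the reward-bound constants B1, B2 and B3 supplied on the command line via -const. A minimal sketch of what that entry plausibly looks like, reconstructed from the analyzed property printed further below (this is an assumption; the actual file contents may differ):

    "rbrmax3": Pmax=? [ true U^{rew{"gold"}>=B1, rew{"gem"}>=B2, rew{"steps"}<=B3} ((x = 3) & (y = 1)) ];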
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 14:43:17 2025
Command line arguments: --prism $BENCH_HOME/models/resrc/resrc.prism --prop $BENCH_HOME/models/resrc/resrc.props rbrmax3 -const 'B1=15,B2=15,B3=180' --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration discretize --resolution 1 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.007s.
Time for model construction: 0.020s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 721
Transitions: 5041
Choices: 2881
Observations: 155
Reward Models: steps, gold, gem
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* ((x = 3) & (y = 1)) -> 16 item(s)
Choice Labels: 5 labels
* place -> 1 item(s)
* right -> 720 item(s)
* left -> 720 item(s)
* up -> 720 item(s)
* down -> 720 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"gold"}>=15 , rew{"gem"}>=15 , rew{"steps"}<=180 }((x = 3) & (y = 1))]'
Perform explicit unfolding of reward bounds.
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 6460384
Transitions: 45268949
Choices: 25841533
Observations: 283
Reward Models: gem, gold, steps
State Labels: 6 labels
* dim2_active -> 6459640 item(s)
* dim0_active -> 146461 item(s)
* dim1_active -> 59516 item(s)
* deadlock -> 0 item(s)
* ((x = 3) & (y = 1)) -> 346241 item(s)
* init -> 1 item(s)
Choice Labels: 5 labels
* up -> 6460383 item(s)
* right -> 6460383 item(s)
* place -> 1 item(s)
* left -> 6460383 item(s)
* down -> 6460383 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [(true & "dim2_active") U (((((x = 3) & (y = 1)) & "dim0_active") & "dim1_active") & "dim2_active")]
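The three reward bounds of the original property have been unfolded into the state space and replaced by per-dimension "active" labels, turning the bounded until into the plain until shown above. The log does not print which dimension corresponds to which bound; a hedged reading, assuming the dimensions follow the order of the bounds in the original property, is:

    // assumed mapping of unfolding labels to the original reward bounds
    //   "dim0_active"  ~  at least 15 "gold" reward accumulated
    //   "dim1_active"  ~  at least 15 "gem" reward accumulated
    //   "dim2_active"  ~  "steps" budget of 180 not yet exceeded

Under this reading, a satisfying path must stay within the step budget and reach (x = 3) & (y = 1) with both lower reward bounds already met.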
Time for pre-processing: 22.777s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
--------------------------------------------------------------
Model type: MDP (sparse)
States: 6460010
Transitions: 45265195
Choices: 25840031
Reward Models: none
State Labels: 3 labels
* target -> 1 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
Time for exploring beliefs: 18.833s.
Time for building the belief MDP: 1.143s.
Time for analyzing the belief MDP: 0.478s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 6460384
Transitions: 45266717
Choices: 25841533
Observations: 284
Reward Models: gem, gold, steps
State Labels: 6 labels
* dim2_active -> 6459640 item(s)
* dim0_active -> 146461 item(s)
* dim1_active -> 59516 item(s)
* deadlock -> 0 item(s)
* ((x = 3) & (y = 1)) -> 346241 item(s)
* init -> 1 item(s)
Choice Labels: 5 labels
* up -> 6460383 item(s)
* right -> 6460383 item(s)
* place -> 1 item(s)
* left -> 6460383 item(s)
* down -> 6460383 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 75845
# Pre-computations detected that the belief MDP is finite.
# Total check time: 75.680s
##########################################
Result: ≤ 0.01336346101
Time for POMDP analysis: 75.897s.
Performance statistics:
* peak memory usage: 10963MB
* CPU time: 92.935s
* wallclock time: 98.716s
############################## Notes ##############################
Storm-POMDP: unfolds cost bounds, cost-aware, with belief discretization at resolution 1