$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const N=6,B1=60,B2=5 --timemem --statistics --revised --reward-aware 10,0 --belief-exploration discretize --resolution 6 --triangulationmode static
Storm-pomdp: sequential approach, cost-aware, with discretization at resolution 6
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 10:30:44 2025
Command line arguments: --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const 'N=6,B1=60,B2=5' --timemem --statistics --revised --reward-aware '10,0' --belief-exploration discretize --resolution 6 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.002s.
Time for model construction: 0.010s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 37
Transitions: 74
Choices: 50
Observations: 2
Reward Models: energy, clean
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* goal -> 1 item(s)
Choice Labels: 3 labels
* clean -> 13 item(s)
* move -> 13 item(s)
* consume -> 24 item(s)
--------------------------------------------------------------
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=60 , rew{"clean"}>5 }"goal"]'
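[Editorial note, not tool output: the multi-reward-bounded property above reads as

\[
\mathrm{Pmax}_{=?}\bigl[\,\mathsf{true}\ \mathsf{U}^{\,\mathrm{rew}(\mathit{energy})\le 60,\ \mathrm{rew}(\mathit{clean})>5}\ \mathit{goal}\,\bigr]
\]

i.e., the maximal probability of reaching a goal state along a path whose accumulated energy cost is at most B1 = 60 while the accumulated clean reward exceeds B2 = 5 at the point the goal is reached.]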
Perform unfolding for observation levels.
Bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 758
Transitions: 1627
Choices: 1096
Observations: 5
Reward Models: dim0_levelReward
State Labels: 4 labels
* goal -> 77 item(s)
* dim1_active -> 11 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* move -> 338 item(s)
* consume -> 420 item(s)
* clean -> 338 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [true U^{rew{"dim0_levelReward"}<=6 }("goal" & "dim1_active")]
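[Editorial note: a plausible reading of this transformation, assuming `--reward-aware 10,0` fixes a grouping step per reward dimension. The energy dimension (step 10) is coarsened into levels, so its bound B1 = 60 becomes the level bound

\[ 60 / 10 = 6 \]

on the fresh reward model dim0_levelReward, while the clean dimension (step 0) is unfolded exactly into the state space, with dim1_active labeling states where the accumulated clean reward already exceeds B2 = 5. This matches the unfolded model above, which carries only the single reward model dim0_levelReward.]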
Time for pre-processing: 0.000s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
--------------------------------------------------------------
Model type: MDP (sparse)
States: 1688339
Transitions: 6192837
Choices: 2384726
Reward Models: dim0_levelReward
State Labels: 3 labels
* target -> 68 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
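[Editorial note on this belief MDP's size: with `--belief-exploration discretize --resolution 6 --triangulationmode static`, each belief encountered during exploration is approximated by a convex combination of grid beliefs whose entries are multiples of 1/6. A minimal Python sketch of the fixed-resolution (Freudenthal-style) triangulation that static grid discretization is commonly based on; all names are illustrative and this is not Storm's actual code:

    import numpy as np

    def triangulate_static(belief, res):
        # Express `belief` as a convex combination of grid beliefs whose
        # entries are integer multiples of 1/res (Freudenthal-style grid).
        # Returns a list of (grid_belief, weight) pairs with positive weight.
        x = res * np.cumsum(belief[::-1])[::-1]  # scaled tail sums; x[0] == res
        v = np.floor(x)
        d = x - v                                # fractional parts
        order = np.argsort(-d, kind="stable")    # indices by descending fraction
        vertices = [v.copy()]
        for i in order:                          # corners of the enclosing simplex
            nxt = vertices[-1].copy()
            nxt[i] += 1.0
            vertices.append(nxt)
        lam = np.concatenate(([1.0], d[order], [0.0]))
        weights = lam[:-1] - lam[1:]             # barycentric coordinates of x
        points = []
        for vert, w in zip(vertices, weights):
            if w > 1e-12:
                # Undo the tail-sum transform to recover a belief vector.
                gb = np.append(vert[:-1] - vert[1:], vert[-1]) / res
                points.append((gb, w))
        return points

    # Example: one belief over 3 states at resolution 6.
    for gb, w in triangulate_static(np.array([0.5, 0.3, 0.2]), 6):
        print(np.round(gb, 3), round(w, 3))

On the belief [0.5, 0.3, 0.2] this yields two grid beliefs with weights 0.8 and 0.2; such splits are what turn a single POMDP belief into several belief-MDP successors during exploration.]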
Analyzing property 'Pmax=? [true U^{rew{"dim0_levelReward"}<=6 }"target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 1836189 states, i.e., 1.087571276 times the size of the original belief MDP.
Merging of sink states resulted in a model with 1077946 states.
Epoch model for epoch <_> is cyclic.
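[Editorial note on the "cyclic" message: in the sequential (epoch-wise) approach to reward-bounded reachability, the maximal probabilities with remaining budget e satisfy, sketched here under standard cost-bounded semantics (notation editorial, not Storm output):

\[
x_e(s) =
\begin{cases}
1 & \text{if } s \models \textit{target}, \\
\max_{a}\ \sum_{s'} P(s,a,s')\; x_{\,e - r(s,a)}(s') & \text{otherwise,}
\end{cases}
\qquad x_e \equiv 0 \ \text{for } e < 0.
\]

Whenever an action has reward r(s,a) = 0, the value x_e(s) depends on other values within the same epoch, so the epoch model contains cycles and must be solved as a fixed point (e.g., by value iteration) rather than by a single backward pass; this is consistent with epoch model checking dominating the statistics below.]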
---------------------------------
Statistics:
---------------------------------
#checked epochs: 8.
overall Time: 16.892s.
Epoch Model building Time: 1.673s.
Epoch Model checking Time: 15.218s.
---------------------------------
Time for exploring beliefs: 7.852s.
Time for building the belief MDP: 0.704s.
Time for analyzing the belief MDP: 19.314s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 758
Transitions: 1627
Choices: 1096
Observations: 7
Reward Models: dim0_levelReward
State Labels: 4 labels
* goal -> 77 item(s)
* dim1_active -> 11 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* move -> 338 item(s)
* consume -> 420 item(s)
* clean -> 338 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 420
# Total check time: 28.951s
##########################################
Result: ≤ 0.9994216766
Time for POMDP analysis: 29.178s.
Performance statistics:
* peak memory usage: 2101MB
* CPU time: 28.536s
* wallclock time: 29.194s
############################## Notes ##############################
Storm-pomdp: sequential approach, cost-aware, with discretization at resolution 6