$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const N=12,B1=120,B2=11 --timemem --statistics --revised --reward-aware 1,0 --belief-exploration discretize --resolution 4 --triangulationmode static
Storm-POMDP: sequential approach, cost-aware, with discretization at resolution 4
Storm-POMDP 1.9.1 (dev)
Date: Mon Feb 10 09:39:23 2025
Command line arguments: --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const 'N=12,B1=120,B2=11' --timemem --statistics --revised --reward-aware '1,0' --belief-exploration discretize --resolution 4 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb
Time for model input parsing: 0.003s.
Time for model construction: 0.017s.
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 73
Transitions: 146
Choices: 98
Observations: 2
Reward Models: energy, clean
State Labels: 3 labels
* deadlock -> 0 item(s)
* init -> 1 item(s)
* goal -> 1 item(s)
Choice Labels: 3 labels
* clean -> 25 item(s)
* move -> 25 item(s)
* consume -> 48 item(s)
--------------------------------------------------------------
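
Note: the card above describes a sparse POMDP with 73 states, 2 observations, choices labeled clean/move/consume, and two reward models (energy, clean). The illustrative sketches added throughout this log use the following minimal pure-Python representation; all field names are ours, not Storm's API, and the contents are elided (see clean.prism):

    # Sparse POMDP, shaped like the model card above.
    # All names are illustrative, not Storm's data structures; contents elided.
    pomdp = {
        # (state, action) -> list of (successor, probability)
        "trans": {},
        # state -> observation index (this model has 2 observations)
        "obs": {},
        # reward model name -> {(state, action) -> value}
        "rewards": {"energy": {}, "clean": {}},
        # label -> set of states, e.g. {"init": {0}, "goal": {...}}
        "labels": {"init": set(), "goal": set()},
    }
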
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=120 , rew{"clean"}>11 }"goal"]'
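
Note: in words, the property asks for the maximum probability of reaching a goal state at a point where the accumulated energy reward is at most 120 (B1) while the accumulated clean reward exceeds 11 (B2). As a math block in our own notation (a sketch of the multi-reward-bounded until, not Storm's formal semantics):

    \mathrm{Pmax} \;=\; \sup_{\sigma}\ \Pr^{\sigma}\Big[\ \exists i:\ s_i \models \mathit{goal}
        \ \wedge\ \textstyle\sum_{j<i} \mathit{rew}_{\mathrm{energy}}(s_j,a_j) \le 120
        \ \wedge\ \textstyle\sum_{j<i} \mathit{rew}_{\mathrm{clean}}(s_j,a_j) > 11\ \Big]
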
Perform unfolding for observation levels.
bounded reachability processing done. POMDP Information:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 728
Transitions: 2002
Choices: 1144
Observations: 7
Reward Models: dim0_levelReward
State Labels: 4 labels
* goal -> 26 item(s)
* dim1_active -> 2 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* move -> 416 item(s)
* consume -> 312 item(s)
* clean -> 416 item(s)
--------------------------------------------------------------
Transformed formula: Pmax=? [true Urew{"dim0_levelReward"}<=120 ("goal" & "dim1_active")]
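
Note: the original property bounds two reward dimensions at once; the unfolding step above bakes one of them into the state space. Going by the flags (--reward-aware 1,0 apparently selects which dimensions are unfolded), the clean dimension is tracked explicitly: product states carry the accumulated clean level, capped just above the threshold, and dim1_active marks states where clean > 11 already holds. The energy dimension survives as the single remaining bound on dim0_levelReward, and observations are refined so that the tracked level stays observable (2 observations before, 7 after, per the cards above). A minimal sketch of such a product construction, assuming the representation introduced earlier; unfold_lower_bound is our name, not Storm's:

    def unfold_lower_bound(trans, reward, threshold):
        """Product of the model with a counter for one reward dimension.

        trans:     (state, action) -> list of (successor, probability)
        reward:    (state, action) -> integer reward in this dimension
        threshold: the lower bound, e.g. 11 for  rew{"clean"} > 11

        The accumulated level is capped at threshold + 1; a product state
        (s, level) satisfies the bound iff level > threshold, which is what
        the dim1_active label marks in the unfolded model above.
        """
        cap = threshold + 1
        product = {}
        for (s, a), succs in trans.items():
            r = reward.get((s, a), 0)
            for level in range(cap + 1):
                new_level = min(level + r, cap)
                product[((s, level), a)] = [((t, new_level), p)
                                            for t, p in succs]
        return product
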
Time for pre-processing: 0.000s.
Exploring the belief MDP...
Exploring the belief space...
Constructing the belief MDP...
--------------------------------------------------------------
Model type: MDP (sparse)
States: 1057
Transitions: 2549
Choices: 1825
Reward Models: dim0_levelReward
State Labels: 3 labels
* target -> 2 item(s)
* init -> 1 item(s)
* bottom -> 1 item(s)
Choice Labels: none
--------------------------------------------------------------
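
Note: --belief-exploration discretize --resolution 4 explores the belief MDP, but instead of keeping every reached belief exactly, each belief is mapped onto grid beliefs whose coordinates are multiples of 1/4 (--triangulationmode static fixes the triangulation scheme). For Pmax this yields an over-approximation, which is why the final result below is reported with '≤'. A sketch of the two ingredients, assuming the representation introduced earlier: the exact belief update, plus a deliberately simplified snap-to-grid step (Storm's static triangulation instead writes a belief as a convex combination of the vertices of its enclosing grid simplex, which is what preserves the bound; snap_to_grid here is only a stand-in):

    from fractions import Fraction

    def belief_update(belief, action, obs, trans, obs_fn):
        """Exact POMDP belief update (filtering).

        belief: state -> probability
        trans:  (state, action) -> list of (successor, probability)
        obs_fn: state -> observation
        Returns the belief conditioned on taking `action` and seeing `obs`,
        or None if `obs` has probability zero under this belief and action.
        """
        new = {}
        for s, b in belief.items():
            for t, p in trans.get((s, action), []):
                if obs_fn[t] == obs:
                    new[t] = new.get(t, 0) + b * p
        z = sum(new.values())
        if z == 0:
            return None
        return {t: v / z for t, v in new.items()}

    def snap_to_grid(belief, resolution):
        """Simplified discretization: round each entry to a multiple of
        1/resolution and renormalize. A stand-in for the static
        triangulation, not Storm's actual scheme."""
        snapped = {s: Fraction(round(p * resolution), resolution)
                   for s, p in belief.items()}
        total = sum(snapped.values())
        if total == 0:
            raise ValueError("resolution too coarse for this belief")
        return {s: p / total for s, p in snapped.items() if p > 0}
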
Analyzing property 'Pmax=? [true Urew{"dim0_levelReward"}<=120 "target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 1800 states. 1.702932829 times more states than the original belief MDP.
Merging of sink states resulted in a model with 279 states.
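
Note: the two lines above are model simplifications before the epoch analysis. One way to realize the first (not necessarily Storm's exact construction) is to replace each rewarded probabilistic branch by a detour through a fresh auxiliary state whose single outgoing action carries the reward, turning per-branch (transition) rewards into per-action rewards at the cost of extra states; the second then collapses states from which the target is unreachable. A sketch of the first transformation, with our own names throughout:

    def transition_to_action_rewards(trans, branch_reward):
        """Replace per-branch (transition) rewards by per-action rewards,
        inserting an auxiliary state for each rewarded branch.

        trans:         (state, action) -> list of (successor, probability)
        branch_reward: (state, action, successor) -> reward
        """
        new_trans, new_reward = {}, {}
        for (s, a), succs in trans.items():
            out = []
            for t, p in succs:
                r = branch_reward.get((s, a, t), 0)
                if r == 0:
                    out.append((t, p))
                else:
                    aux = ("aux", s, a, t)      # fresh intermediate state
                    out.append((aux, p))
                    new_trans[(aux, "tau")] = [(t, 1.0)]
                    new_reward[(aux, "tau")] = r
            new_trans[(s, a)] = out
        return new_trans, new_reward
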
Epoch model for epoch <_> is cyclic.
---------------------------------
Statistics:
---------------------------------
#checked epochs: 122.
overall Time: 0.001s.
Epoch Model building Time: 0.000s.
Epoch Model checking Time: 0.000s.
---------------------------------
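
Note: the statistics above come from the sequential (epoch-based) approach to reward-bounded reachability: the bound rew{"dim0_levelReward"} <= 120 is sliced into epochs, roughly one per remaining-budget value, and one small epoch model is solved per slice (on the order of 121 epochs for budgets 0..120; we do not reconstruct the exact bookkeeping behind the count of 122). Zero-reward cycles keep an epoch model cyclic, as reported above, so such an epoch needs value iteration rather than a single backward pass. A compact sketch of the underlying dynamic program over a plain MDP, with our own names throughout:

    def bounded_reach_max(states, actions, trans, reward, target, budget,
                          eps=1e-8):
        """Max probability of reaching `target` while the accumulated
        (nonnegative, integer) reward stays <= budget.

        V[g][s] is the value with remaining budget g; epochs are solved in
        increasing budget order. Zero-reward steps stay inside the current
        epoch, which is exactly what makes an epoch model cyclic.
        """
        V = [{s: (1.0 if s in target else 0.0) for s in states}
             for _ in range(budget + 1)]
        for g in range(budget + 1):
            while True:                   # value iteration within the epoch
                delta = 0.0
                for s in states:
                    if s in target:
                        continue
                    best = 0.0
                    for a in actions.get(s, ()):
                        r = reward.get((s, a), 0)
                        if r > g:
                            continue      # taking a would exceed the bound
                        best = max(best, sum(p * V[g - r][t]
                                             for t, p in trans[(s, a)]))
                    delta = max(delta, abs(best - V[g][s]))
                    V[g][s] = best
                if delta < eps:
                    break
        return V[budget]
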
Time for exploring beliefs: 0.001s.
Time for building the belief MDP: 0.000s.
Time for analyzing the belief MDP: 0.002s.
##### POMDP Approximation Statistics ######
# Input model:
--------------------------------------------------------------
Model type: POMDP (sparse)
States: 728
Transitions: 2002
Choices: 1144
Observations: 9
Reward Models: dim0_levelReward
State Labels: 4 labels
* goal -> 26 item(s)
* dim1_active -> 2 item(s)
* deadlock -> 0 item(s)
* init -> 1 item(s)
Choice Labels: 3 labels
* move -> 416 item(s)
* consume -> 312 item(s)
* clean -> 416 item(s)
--------------------------------------------------------------
# Max. Number of states with same observation: 312
# Total check time: 0.005s
##########################################
Result: ≤ 0.9999996074
Time for POMDP analysis: 0.005s.
Performance statistics:
* peak memory usage: 51MB
* CPU time: 0.018s
* wallclock time: 0.033s
############################## Notes ##############################
Storm-POMDP: sequential approach, cost-aware, with discretization at resolution 4