storm.belseqc25

Benchmark
id: lvls_clean_rbrmax2_N6-B160-B25-__lvl115-__lvl20 (POMDP)
Invocation (belseqc25)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const N=6,B1=60,B2=5 --timemem --statistics --revised --reward-aware 15,0 --belief-exploration unfold --size-threshold 33554432
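For reproduction, the same call can be written as a shell sketch with the size threshold spelled out as a power of two (a minimal sketch, assuming $BENCH_HOME points at the benchmark checkout and the storm-pomdp binary is built there, as in the working directory recorded in the log below):

  # Size threshold from the configuration note: 2^25 = 33554432 beliefs.
  THRESHOLD=$((1 << 25))
  "$BENCH_HOME/bin/storm-pomdp" \
    --prism "$BENCH_HOME/models/clean/clean.prism" \
    --prop "$BENCH_HOME/models/clean/clean.props" rbrmax2 \
    -const 'N=6,B1=60,B2=5' \
    --timemem --statistics --revised \
    --reward-aware '15,0' \
    --belief-exploration unfold \
    --size-threshold "$THRESHOLD"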
Execution
Walltime: 366.2212064266205s
Return code: 0
Note(s): Storm-pomdp, sequential approach, cost-aware, with cutoffs and a size threshold of 2^25 (= 33554432)
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 10:36:15 2025
Command line arguments: --prism $BENCH_HOME/models/clean/clean.prism --prop $BENCH_HOME/models/clean/clean.props rbrmax2 -const 'N=6,B1=60,B2=5' --timemem --statistics --revised --reward-aware '15,0' --belief-exploration unfold --size-threshold 33554432
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.001s.

Time for model construction: 0.007s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	37
Transitions: 	74
Choices: 	50
Observations: 	2
Reward Models:  energy, clean
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * goal -> 1 item(s)
Choice Labels: 	3 labels
   * clean -> 13 item(s)
   * move -> 13 item(s)
   * consume -> 24 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=60 , rew{"clean"}>5 }"goal"]'
Perform unfolding for observation levels.
Bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	2107
Transitions: 	4445
Choices: 	2954
Observations: 	5
Reward Models:  dim0_levelReward
State Labels: 	4 labels
   * goal -> 112 item(s)
   * dim1_active -> 16 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * move -> 847 item(s)
   * consume -> 1260 item(s)
   * clean -> 847 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [true Urew{"dim0_levelReward"}<=4 ("goal" & "dim1_active")]
Time for pre-processing: 0.001s.
Exploring the belief MDP... 
Exploring the belief space...
Exploration stopped before all beliefs were explored. 33554433 beliefs discovered. 26405607 beliefs explored.
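(Here 33554433 = 2^25 + 1: exploration stopped as soon as the number of discovered beliefs exceeded the configured --size-threshold of 33554432, with the unexplored frontier then handled by cutoffs, per the configuration note above.)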
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	33554434
Transitions: 	60762982
Choices: 	52037416
Reward Models:  dim0_levelReward
State Labels: 	3 labels
   * target -> 44773 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true Urew{"dim0_levelReward"}<=4 "target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 35869620 states, i.e., 1.068997915 times as many states as the original belief MDP.
Merging of sink states resulted in a model with 5126033 states.
---------------------------------
Statistics:
---------------------------------
          #checked epochs: 6.
             overall Time: 25.629s.
Epoch Model building Time: 23.002s.
Epoch Model checking Time: 2.626s.
---------------------------------
Time for exploring beliefs: 241.822s.
Time for building the belief MDP: 36.145s.
Time for analyzing the belief MDP: 52.893s.
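(The three phases itemized above account for 241.822s + 36.145s + 52.893s ≈ 330.86s of the 364.864s total check time; the remaining ~34s is not itemized in the log.)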
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	2107
Transitions: 	4445
Choices: 	2954
Observations: 	7
Reward Models:  dim0_levelReward
State Labels: 	4 labels
   * goal -> 112 item(s)
   * dim1_active -> 16 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * move -> 847 item(s)
   * consume -> 1260 item(s)
   * clean -> 847 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 1260
# Total check time: 364.864s
##########################################

Result: ≥ 0.9004917044
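(The result is reported as a lower bound, "≥", because belief exploration was truncated at the size threshold and completed with cutoffs, so 0.9004917044 under-approximates the true maximal probability.)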
Time for POMDP analysis: 364.864s.

Performance statistics:
  * peak memory usage: 61870MB
  * CPU time: 331.805s
  * wallclock time: 364.876s


############################## Notes ##############################
Storm-pomdp, sequential approach, cost-aware, with cutoffs and a size threshold of 2^25 (= 33554432)