storm.belseqd06

Benchmark
id: main_obstcl_rbrmax2_B125-B27 (POMDP)
Invocation (belseqd06)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/obstcl/obstcl.prism --prop $BENCH_HOME/models/obstcl/obstcl.props rbrmax2 -const B1=25,B2=7 --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 6 --triangulationmode static
Storm-pomdp: sequential approach, cost-aware, using belief discretization with resolution 6.
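The property name rbrmax2 selects a multi-reward-bounded reachability query from obstcl.props; with B1=25 and B2=7 it instantiates to the formula reported in the log below, Pmax=? [true U^{rew{"energy"}<=25 , rew{"steps"}<=7 }"goal"]. To see how the discretization resolution affects the reported bound, the same invocation can be repeated with different --resolution values. A minimal sketch, assuming $BENCH_HOME points at the benchmark root and using only flags from the invocation above; the output log names are illustrative:

for res in 2 3 4 5 6; do
  "$BENCH_HOME/bin/storm-pomdp" \
    --prism "$BENCH_HOME/models/obstcl/obstcl.prism" \
    --prop "$BENCH_HOME/models/obstcl/obstcl.props" rbrmax2 \
    -const B1=25,B2=7 \
    --timemem --statistics --revised --reward-aware \
    --belief-exploration discretize --resolution "$res" \
    --triangulationmode static \
    > "obstcl_rbrmax2_res${res}.log" 2>&1
done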
Execution
Walltime: 0.22055864334106445s
Return code: 0
Note(s): Storm-pomdp: sequential approach, cost-aware, using belief discretization with resolution 6.
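The Walltime and Return code fields above can also be recorded outside the benchmark harness with standard tooling. A minimal sketch, assuming GNU time is installed; $CMD is an illustrative placeholder for the full storm-pomdp command line from the Invocation section:

# %e = elapsed wallclock seconds, %M = peak resident set size in KB (GNU time)
/usr/bin/time -f "wallclock %e s, max RSS %M KB" -o time.log $CMD > run.log 2>&1
echo "Return code: $?"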
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 14:16:43 2025
Command line arguments: --prism $BENCH_HOME/models/obstcl/obstcl.prism --prop $BENCH_HOME/models/obstcl/obstcl.props rbrmax2 -const 'B1=25,B2=7' --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 6 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.005s.

Time for model construction: 0.018s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	25
Transitions: 	115
Choices: 	80
Observations: 	10
Reward Models:  steps, energy
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * goal -> 1 item(s)
Choice Labels: 	4 labels
   * north -> 20 item(s)
   * east -> 20 item(s)
   * south -> 20 item(s)
   * west -> 20 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=25 , rew{"steps"}<=7 }"goal"]'
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	43
Transitions: 	202
Choices: 	139
Observations: 	23
Reward Models:  steps, energy
State Labels: 	3 labels
   * goal -> 2 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	4 labels
   * west -> 34 item(s)
   * south -> 35 item(s)
   * north -> 35 item(s)
   * east -> 35 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [true U^{rew{"energy"}<=25 , rew{"steps"}<=7 }"goal"]
Time for pre-processing: 0.000s.
Exploring the belief MDP... 
Exploring the belief space...
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	1119
Transitions: 	16707
Choices: 	4280
Reward Models:  steps, energy
State Labels: 	3 labels
   * target -> 2 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true U^{rew{"energy"}<=25 , rew{"steps"}<=7 }"target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 2236 states. 1.99821269 times more states than the original belief MDP.
Merging of sink states resulted in a model with 2235 states.
Epoch model for epoch <_, _> is cyclic.
---------------------------------
Statistics:
---------------------------------
          #checked epochs: 83.
             overall Time: 0.016s.
Epoch Model building Time: 0.008s.
Epoch Model checking Time: 0.007s.
---------------------------------
Time for exploring beliefs: 0.007s.
Time for building the belief MDP: 0.001s.
Time for analyzing the belief MDP: 0.028s.
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	43
Transitions: 	202
Choices: 	139
Observations: 	23
Reward Models:  steps, energy
State Labels: 	3 labels
   * goal -> 2 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	4 labels
   * west -> 34 item(s)
   * south -> 35 item(s)
   * north -> 35 item(s)
   * east -> 35 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 9
# Total check time: 0.038s
##########################################

Result: ≤ 0.8701171875
Time for POMDP analysis: 0.038s.

Performance statistics:
  * peak memory usage: 52MB
  * CPU time: 0.057s
  * wallclock time: 0.068s


############################## Notes ##############################
Storm-pomdp: sequential approach, cost-aware, using belief discretization with resolution 6.
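To compare runs like this one across resolutions or benchmark instances, the relevant lines can be pulled straight from the logs. A minimal post-processing sketch, assuming logs named as in the sweep sketch above; the grep patterns match the "Result:" and "Time for POMDP analysis:" lines of this log format:

for log in obstcl_rbrmax2_res*.log; do
  bound=$(grep -m1 '^Result:' "$log")
  t=$(grep -m1 '^Time for POMDP analysis:' "$log")
  echo "$log | $bound | $t"
done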