storm.belseqd42

Benchmark
id: main_walk_rbrmax1_N120-B180 (POMDP)
Invocation (belseqd42)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const N=120,B1=80 --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 42 --triangulationmode static
Storm-pomdp, sequential approach, cost-aware, with belief discretization at resolution 42.
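The property rbrmax1 instantiated here, Pmax=? [true Urew{"obsCost"}<=80 "goal"] (see the log below), asks for the maximal probability of reaching a goal state while the accumulated obsCost reward stays at most B1=80. Under --belief-exploration discretize, explored beliefs are mapped onto a fixed grid whose fineness is set by --resolution, and --triangulationmode static fixes how off-grid beliefs are triangulated into grid points. For rough intuition only (grid_points is an illustrative helper, not Storm API, and this is a sketch of a Lovejoy-style grid, not necessarily Storm's exact construction): beliefs over n support states whose entries are integer multiples of 1/eta number C(eta+n-1, n-1).

from math import comb

# Illustrative helper: count beliefs over n support states whose entries
# are integer multiples of 1/eta (compositions of eta into n parts).
def grid_points(n_support: int, eta: int) -> int:
    return comb(eta + n_support - 1, n_support - 1)

for n in (2, 3, 4):
    print(n, grid_points(n, 42))   # 43, 946, 14190

This combinatorial growth in the belief support size is consistent with the roughly 11.5-million-state belief MDP reported in the log below.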
Execution
Walltime: 428.84615421295166 s
Return code: -9 (terminated by SIGKILL; typically an out-of-memory or resource-limit kill by the benchmarking environment)
Note(s): Storm-pomdp, sequential approach, cost-aware, with belief discretization at resolution 42.
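For reference, a negative return code from a POSIX process means the process was terminated by a signal. A minimal Python sketch of how such a code decodes (the actual benchmark harness is not part of this report; "true" stands in for the storm-pomdp invocation above):

import signal
import subprocess

result = subprocess.run(["true"])  # stand-in for the real command
if result.returncode < 0:
    # -9 decodes to SIGKILL, the signal sent e.g. by the kernel OOM killer.
    print("killed by signal", signal.Signals(-result.returncode).name)
else:
    print("exited normally with code", result.returncode)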
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 15:03:50 2025
Command line arguments: --prism $BENCH_HOME/models/walk/walk.prism --prop $BENCH_HOME/models/walk/walk.props rbrmax1 -const 'N=120,B1=80' --timemem --statistics --revised --reward-aware --belief-exploration discretize --resolution 42 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.001s.

Time for model construction: 0.007s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	244
Transitions: 	968
Choices: 	607
Observations: 	124
Reward Models:  obsCost
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * goal -> 1 item(s)
Choice Labels: 	3 labels
   * observe -> 121 item(s)
   * move -> 242 item(s)
   * stop -> 244 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true Urew{"obsCost"}<=80 "goal"]'
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	366
Transitions: 	1577
Choices: 	973
Observations: 	126
Reward Models:  obsCost
State Labels: 	3 labels
   * goal -> 1 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	3 labels
   * observe -> 243 item(s)
   * stop -> 366 item(s)
   * move -> 364 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [true Urew{"obsCost"}<=80 "goal"]
Time for pre-processing: 0.000s.
Exploring the belief MDP... 
Exploring the belief space...
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	11547144
Transitions: 	274437867
Choices: 	34641305
Reward Models:  obsCost
State Labels: 	3 labels
   * target -> 1 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true Urew{"obsCost"}<=80 "target"]' on the belief MDP...
Transformation of transition rewards resulted in a model with 17321334 states. 1.50005352 times more states than the original belief MDP.
Merging of sink states resulted in a model with 17321328 states.
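Two sanity checks on the figures above (back-of-envelope only; the per-entry byte cost is an assumed lower bound, not Storm's actual memory layout):

# Verify the reported 1.50005352x state blow-up of the transition-reward
# transformation against the logged state counts.
original    = 11_547_144
transformed = 17_321_334
print(transformed / original)            # ~1.50005352

# Rough lower bound on the sparse-matrix footprint of the belief MDP,
# assuming >= 16 bytes per transition entry (8-byte value + 8-byte index).
transitions = 274_437_867
print(transitions * 16 / 2**30, "GiB")   # ~4.1 GiB before transformation

The raw matrix is only a few GiB, so if the SIGKILL was memory-related, the pressure came from the combined footprint of belief exploration, the transformed 17.3M-state model, and the value-iteration workspace (the working directory name suggests a 64 GB machine); a harness-enforced resource limit is the other usual suspect.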