storm.caunfd25

Benchmark
id: main_rover_rbrmax3_B1200-B20360-B3200 (POMDP)
Invocation (caunfd25)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/rover/rover.prism --prop $BENCH_HOME/models/rover/rover.props rbrmax3 -const B1=200,B2=360,B3=200 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration discretize --resolution 25 --triangulationmode static
Storm-pomdp: unfolds the cost bounds, cost-aware, with belief discretization at resolution 25 (see the annotated flags below).
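The key options gloss as follows (a readability sketch inferred from the note above and from the log below; the $BENCH_HOME prefix stays symbolic as in the original):

# -const B1=200,B2=360,B3=200  : instantiate the bounds on the value, time, and
#                                energy reward structures of rover.prism
# --unfold-reward-bound        : explicitly unfold the reward bounds into the state
#                                space (the log reports "Perform explicit unfolding
#                                of reward bounds.")
# --reward-aware               : extend the observation function to become reward
#                                aware (the log reports "Extend observation function
#                                to become reward aware.")
# --belief-exploration discretize --resolution 25
#                              : explore a discretized belief grid at resolution 25
#                                instead of the exact belief MDP; this yields a bound
#                                on Pmax, matching the "Result: <= ..." line below
# --triangulationmode static   : use the static triangulation scheme when mapping
#                                beliefs to grid points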
Execution
Walltime: 437.52187943458557s
Return code: 0
Note(s): Storm-pomdp: unfolds the cost bounds, cost-aware, with belief discretization at resolution 25
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 14:43:28 2025
Command line arguments: --prism $BENCH_HOME/models/rover/rover.prism --prop $BENCH_HOME/models/rover/rover.props rbrmax3 -const 'B1=200,B2=360,B3=200' --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration discretize --resolution 25 --triangulationmode static
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.002s.

Time for model construction: 0.008s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	16
Transitions: 	30
Choices: 	20
Observations: 	9
Reward Models:  value, time, energy
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * done -> 1 item(s)
Choice Labels: 	6 labels
   * done -> 2 item(s)
   * task_done -> 14 item(s)
   * task1_start -> 1 item(s)
   * task2_start -> 1 item(s)
   * task4_start -> 1 item(s)
   * task3_start -> 1 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true U^{rew{"value"}>=200 , rew{"time"}<=360 , rew{"energy"}<=200 }(done)]'
Perform explicit unfolding of reward bounds.
Extend observation function to become reward aware.
Bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	18847535
Transitions: 	136382295
Choices: 	52428895
Observations: 	26
Reward Models:  energy, time, value
State Labels: 	6 labels
   * done -> 696813 item(s)
   * dim2_active -> 18847507 item(s)
   * dim0_active -> 179933 item(s)
   * dim1_active -> 18847507 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	6 labels
   * task3_start -> 8395340 item(s)
   * task4_start -> 8395340 item(s)
   * task2_start -> 8395340 item(s)
   * task1_start -> 8395340 item(s)
   * task_done -> 9755382 item(s)
   * done -> 9092153 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [((true & "dim1_active") & "dim2_active") U ((((done) & "dim0_active") & "dim1_active") & "dim2_active")]
Time for pre-processing: 23.468s.
Exploring the belief MDP... 
Exploring the belief space...
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	15356900
Transitions: 	97186203
Choices: 	48938260
Reward Models:  none
State Labels: 	3 labels
   * target -> 1 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Time for exploring beliefs: 58.896s.
Time for building the belief MDP: 2.892s.
Time for analyzing the belief MDP: 51.981s.
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	18847535
Transitions: 	136382165
Choices: 	52428895
Observations: 	27
Reward Models:  energy, time, value
State Labels: 	6 labels
   * done -> 696813 item(s)
   * dim2_active -> 18847507 item(s)
   * dim0_active -> 179933 item(s)
   * dim1_active -> 18847507 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	6 labels
   * task3_start -> 8395340 item(s)
   * task4_start -> 8395340 item(s)
   * task2_start -> 8395340 item(s)
   * task1_start -> 8395340 item(s)
   * task_done -> 9755382 item(s)
   * done -> 9092153 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 1393626
# Pre-computations detected that the belief MDP is finite.
# Total check time: 411.188s
##########################################

Result: ≤ 0.8609527907
Time for POMDP analysis: 413.560s.

Performance statistics:
  * peak memory usage: 42573MB
  * CPU time: 408.628s
  * wallclock time: 437.054s


############################## Notes ##############################
Storm-pomdp: unfolds the cost bounds, cost-aware, with belief discretization at resolution 25
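
For reference, a plausible reconstruction of the rbrmax3 property as it could appear in rover.props, assuming the constants B1, B2, B3 substitute into the analyzed formula exactly as logged above (a sketch, not the verified file contents):

// Maximal probability of reaching "done" while accumulating at least B1 "value",
// spending at most B2 "time" and at most B3 "energy"
// (reconstructed from the analyzed formula in the log; names are assumptions)
"rbrmax3": Pmax=? [ true U^{rew{"value"}>=B1, rew{"time"}<=B2, rew{"energy"}<=B3} "done" ];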