storm.caunfc11

Benchmark
id: main_serv_rbrmax1_B11000 (POMDP)
Invocation (caunfc11)
$BENCH_HOME/bin/storm-pomdp --prism $BENCH_HOME/models/serv/serv.prism --prop $BENCH_HOME/models/serv/serv.props rbrmax1 -const B1=1000 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration unfold --size-threshold 2048
Storm-pomdp: unfolds cost bounds, cost-aware, with cutoffs and a size threshold of 2^11 (= 2048)
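The --unfold-reward-bound flag removes the reward bound from the property before analysis by tracking the accumulated reward inside the state space. Below is a minimal sketch of such a product construction in Python (illustrative only; the function and variable names are hypothetical and this is not Storm's implementation):

    from collections import deque

    def unfold_reward_bound(transitions, rewards, init, bound):
        """Unfold a reward bound into the state space: product states are
        (state, accumulated_reward), saturated at bound + 1 once the budget
        is exceeded. 'transitions' maps state -> [(action, [(prob, succ)])],
        'rewards' maps (state, action) -> reward collected by that choice."""
        sink = bound + 1                  # all accumulations > bound lumped together
        product = {}                      # (state, acc) -> unfolded choices
        queue = deque([(init, 0)])
        while queue:
            s, acc = queue.popleft()
            if (s, acc) in product:
                continue
            choices = []
            for action, succs in transitions[s]:
                acc2 = min(acc + rewards.get((s, action), 0), sink)
                branch = [(p, (t, acc2)) for p, t in succs]
                for _, nxt in branch:
                    if nxt not in product:
                        queue.append(nxt)
                choices.append((action, branch))
            product[(s, acc)] = choices
        return product

In the log this unfolding shows up as the blow-up from 76928 to 1124833 states; the fresh label dim0_active plays the role of the condition acc <= bound.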
Execution
Walltime: 16.37742781639099 s
Return code: 0
Note(s): Storm-pomdp: unfolds cost bounds, cost-aware, with cutoffs and a size threshold of 2^11 (= 2048)
Log
Storm-POMDP 1.9.1 (dev)

Date: Mon Feb 10 15:03:43 2025
Command line arguments: --prism $BENCH_HOME/models/serv/serv.prism --prop $BENCH_HOME/models/serv/serv.props rbrmax1 -const B1=1000 --timemem --statistics --revised --reward-aware --unfold-reward-bound --belief-exploration unfold --size-threshold 2048
Current working directory: $BENCH_HOME/experiments64gb

Time for model input parsing: 0.006s.

Time for model construction: 0.717s.

-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	76928
Transitions: 	716800
Choices: 	571264
Observations: 	6016
Reward Models:  time
State Labels: 	3 labels
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
   * taskdone -> 384 item(s)
Choice Labels: 	9 labels
   * north -> 62208 item(s)
   * east -> 62208 item(s)
   * south -> 62208 item(s)
   * wait_delay -> 14720 item(s)
   * west -> 62208 item(s)
   * check_water -> 76928 item(s)
   * deliver_water -> 76928 item(s)
   * get_water -> 76928 item(s)
   * interact -> 76928 item(s)
-------------------------------------------------------------- 
Analyzing property 'Pmax=? [true Urew{"time"}<=1000 "taskdone"]'
Perform explicit unfolding of reward bounds.
Extend observation function to become reward aware.
bounded reachability processing done. POMDP Information:
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	1124833
Transitions: 	10708717
Choices: 	8390924
Observations: 	33316
Reward Models:  time
State Labels: 	4 labels
   * taskdone -> 2792 item(s)
   * dim0_active -> 735969 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	9 labels
   * west -> 922253 item(s)
   * wait_delay -> 202580 item(s)
   * south -> 922253 item(s)
   * north -> 922253 item(s)
   * interact -> 1124833 item(s)
   * get_water -> 1124833 item(s)
   * east -> 922253 item(s)
   * deliver_water -> 1124833 item(s)
   * check_water -> 1124833 item(s)
-------------------------------------------------------------- 
Transformed formula: Pmax=? [(true & "dim0_active") U ("taskdone" & "dim0_active")]
Time for pre-processing: 1.670s.
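Reading the original and the transformed formula side by side: the reward-bounded until over the original POMDP M is replaced by an unbounded until over the unfolded POMDP M', where dim0_active labels exactly those product states whose accumulated "time" reward is still within B1 = 1000. In LaTeX notation (a paraphrase of the two log lines, not Storm syntax):

    \Pr^{\max}_{\mathcal{M}}\left[\mathit{true}\ \mathrm{U}^{\mathrm{rew}(\mathit{time}) \le 1000}\ \text{"taskdone"}\right]
      \;=\; \Pr^{\max}_{\mathcal{M}'}\left[\text{"dim0\_active"}\ \mathrm{U}\ (\text{"taskdone"} \wedge \text{"dim0\_active"})\right]

The --reward-aware step additionally extends the observation function, presumably so that observation-based policies may depend on the reward collected so far; this is visible in the growth from 6016 to 33316 observations.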
Exploring the belief MDP... 
Exploring the belief space...
Exploration stopped before all beliefs were explored. 2049 beliefs discovered. 406 beliefs explored.
Constructing the belief MDP...
-------------------------------------------------------------- 
Model type: 	MDP (sparse)
States: 	2051
Transitions: 	25229
Choices: 	14379
Reward Models:  none
State Labels: 	3 labels
   * target -> 1 item(s)
   * init -> 1 item(s)
   * bottom -> 1 item(s)
Choice Labels: 	none
-------------------------------------------------------------- 
Time for exploring beliefs: 0.001s.
Time for building the belief MDP: 0.001s.
Time for analyzing the belief MDP: 0.002s.
##### POMDP Approximation Statistics ######
# Input model: 
-------------------------------------------------------------- 
Model type: 	POMDP (sparse)
States: 	1124833
Transitions: 	9927533
Choices: 	8390924
Observations: 	33347
Reward Models:  time
State Labels: 	4 labels
   * taskdone -> 2792 item(s)
   * dim0_active -> 735969 item(s)
   * deadlock -> 0 item(s)
   * init -> 1 item(s)
Choice Labels: 	9 labels
   * west -> 922253 item(s)
   * wait_delay -> 202580 item(s)
   * south -> 922253 item(s)
   * north -> 922253 item(s)
   * interact -> 1124833 item(s)
   * get_water -> 1124833 item(s)
   * east -> 922253 item(s)
   * deliver_water -> 1124833 item(s)
   * check_water -> 1124833 item(s)
-------------------------------------------------------------- 
# Max. Number of states with same observation: 1377
# Pre-computations detected that the belief MDP is finite.
# Total check time: 13.362s
##########################################
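With --belief-exploration unfold and --size-threshold 2048, only a finite fragment of the belief MDP is explored: exploration stops once more than 2048 beliefs have been discovered (here: 2049 discovered, 406 expanded), and every remaining frontier belief becomes a cut-off state with a safe value bound. A schematic sketch of that loop, assuming hashable belief representations (hypothetical names; Storm's actual exploration order and cut-off values are more sophisticated):

    from collections import deque

    def explore_beliefs(initial_belief, expand, size_threshold):
        """Breadth-first exploration of the belief MDP until 'size_threshold'
        beliefs have been discovered; whatever remains on the frontier is cut
        off. 'expand' maps a belief to [(action, [(prob, successor_belief)])]."""
        discovered = {initial_belief}
        explored, cutoffs = {}, set()
        frontier = deque([initial_belief])
        while frontier:
            if len(discovered) > size_threshold:
                # Threshold hit: frontier beliefs become cut-off states whose
                # value is replaced by a safe bound (0 suffices for a Pmax
                # lower bound), keeping the finite MDP's result sound.
                cutoffs.update(frontier)
                break
            b = frontier.popleft()
            explored[b] = expand(b)
            for _, branch in explored[b]:
                for _, b2 in branch:
                    if b2 not in discovered:
                        discovered.add(b2)
                        frontier.append(b2)
        return explored, cutoffs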

Result: ≥ 0.000224844424
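(The "≥" reflects the cut-off construction: the value computed on the truncated belief MDP under-approximates Pmax, so the true maximal probability is at least 0.000224844424.)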
Time for POMDP analysis: 13.559s.

Performance statistics:
  * peak memory usage: 4521MB
  * CPU time: 14.582s
  * wallclock time: 16.186s


############################## Notes ##############################
Storm-pomdp: unfolds cost bounds, cost-aware, with cutoffs and a size threshold of 2^11 (= 2048)