Commit graph

78 commits

5d62e3cee8
Implement env from PhD-smflooding-scene 2024-08-29 16:43:29 +01:00
0f9f185983
dlr: add PARALLEL_READS env var, update docs 2023-11-30 16:33:22 +00:00
7869505cfb
dlr: add PREDICT_AS_ONE 2023-06-16 18:23:40 +01:00
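
Several commits in this log add environment-variable switches (PARALLEL_READS, PREDICT_AS_ONE, JIT_COMPILE, UPSAMPLE, and others). A minimal sketch of that pattern, assuming plain os.environ lookups; the default values shown here are invented for illustration and not taken from the repository:

    import os

    # Integer setting read from the environment, with a made-up fallback
    PARALLEL_READS = int(os.environ.get("PARALLEL_READS", "4"))

    # Boolean-style flag: "1" switches it on, anything else leaves it off
    PREDICT_AS_ONE = os.environ.get("PREDICT_AS_ONE", "0") == "1"

    print(f"PARALLEL_READS={PARALLEL_READS}, PREDICT_AS_ONE={PREDICT_AS_ONE}")
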
3be38823db
dlr: fix another crash 2023-05-19 22:00:23 +01:00
3e42972fb0
dlr: add comment 2023-05-11 17:45:52 +01:00
535bd6f579
dlr: fix crash 2023-05-07 19:00:02 +01:00
ac040717e6
autoset the value of factor 2023-05-04 19:57:02 +01:00
214afcc914
dlr: steps per execution is ugh 2023-05-04 19:54:51 +01:00
f0a2e7c450
dlr: print values of new env vars 2023-05-04 19:42:40 +01:00
8593999eb6
dlr: add JIT_COMPILE 2023-05-04 18:22:18 +01:00
dddc08c663
dlr: set steps_per_execution to 16 by default 2023-05-04 18:13:08 +01:00
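
The two commits above touch Keras compile-time options. A sketch of how JIT_COMPILE and a steps_per_execution default of 16 might be wired in; steps_per_execution and jit_compile are real Model.compile() arguments in recent TensorFlow releases, but the env-var handling and the stand-in model are assumptions:

    import os
    import tensorflow as tf

    jit_compile = os.environ.get("JIT_COMPILE", "0") == "1"
    steps_per_execution = int(os.environ.get("STEPS_PER_EXECUTION", "16"))

    # Stand-in model; the real script builds a DeepLabV3+-style network
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

    model.compile(
        optimizer="adam",
        loss="mse",
        steps_per_execution=steps_per_execution,  # run N batches per tf.function call
        jit_compile=jit_compile,                  # XLA-compile the train/predict step
    )
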
e2e6a56b40
dlr: add UPSAMPLE env var
...AND actually add the functionality this time!
2023-05-04 17:40:16 +01:00
6b17d45aad
dlr: plot all metrics 2023-03-22 17:41:34 +00:00
e565c36149
deeplabv3+: prepare for ConvNeXt 2023-03-14 21:51:41 +00:00
779b546897
dlr: log new env vars 2023-03-14 20:28:22 +00:00
623208ba6d
dlr: add env var for water thresholding 2023-03-14 20:18:39 +00:00
c5fc62c411
dlr CHANGE: Add optional log(cosh(dice_loss))
Ref https://doi.org/10.1109/cibcb48159.2020.9277638
2023-03-10 20:24:13 +00:00
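
The commit above (c5fc62c411) adds an optional log(cosh(dice_loss)) loss, following the referenced survey. A minimal sketch of what that might look like, assuming one-hot masks of shape [batch, H, W, classes] and softmax outputs; the function names are assumptions, not taken from the repository:

    import tensorflow as tf

    def dice_loss(y_true, y_pred, smooth=1e-6):
        # Soft Dice loss, one value per sample
        y_true = tf.cast(y_true, tf.float32)
        intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
        totals = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
        return 1.0 - (2.0 * intersection + smooth) / (totals + smooth)

    def log_cosh_dice_loss(y_true, y_pred):
        # log(cosh(x)) is smooth around zero and grows roughly like |x| - log(2)
        return tf.math.log(tf.math.cosh(dice_loss(y_true, y_pred)))
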
8e76ac2864
dlr: typo 2023-03-10 17:40:16 +00:00
5e79825292
dlr: tf.Tensor → numpy array → list → json → disk 2023-03-10 17:34:49 +00:00
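
The commit above spells out the serialisation chain. A sketch of that chain as a helper that appends one prediction per line to a JSONL file; the function name and filename are made up for this sketch:

    import json
    import tensorflow as tf

    def append_prediction_jsonl(prediction: tf.Tensor, filepath="raw_outputs.jsonl"):
        as_list = prediction.numpy().tolist()         # tf.Tensor -> numpy array -> list
        with open(filepath, "a") as handle:
            handle.write(json.dumps(as_list) + "\n")  # -> json -> disk, one record per line
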
b287970032
dlr: missing import 2023-03-10 17:31:03 +00:00
e6cd0aa1b9
dlr: fix crash 2023-03-10 17:14:06 +00:00
cf37aeb11a
dlr: truncate jsonl before we start to avoid mixing things up 2023-03-10 17:11:10 +00:00
5fdf229d06
dlr: write raw outputs to jsonl file 2023-03-10 17:07:44 +00:00
d24381babb
dlr: make size of prediction graph actually sensible 2023-03-09 19:54:27 +00:00
8d932757b5
dlr: variable name typo 2023-03-09 19:44:39 +00:00
b6ca885fe2
dlr: told you so 2023-03-09 19:43:35 +00:00
f9df19cfd5
dlr: next of several typos 2023-03-09 19:34:45 +00:00
96eae636ea
dlr: add missing functions to .load() custom objs
apparently metrics are also required to be included here...
2023-03-09 19:26:57 +00:00
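
The commit above notes that custom metrics, not only custom losses, have to be passed to tf.keras.models.load_model(). A sketch of the call shape, reusing the hypothetical dice_loss / sensitivity / specificity / OneHotMeanIoU helpers sketched elsewhere in this log; the saved-model path is made up:

    import tensorflow as tf

    model = tf.keras.models.load_model(
        "path/to/saved_model",
        custom_objects={
            "dice_loss": dice_loss,          # custom loss used at training time
            "sensitivity": sensitivity,      # custom metrics must be listed as well
            "specificity": specificity,
            "OneHotMeanIoU": OneHotMeanIoU,
        },
    )
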
ad52ae9241
dlr: also plot inputs 2023-03-09 19:13:25 +00:00
436ab78438
dlr: when predicting, also display heatmap
...of positive predictions
2023-03-09 18:54:28 +00:00
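
The commit above displays a heatmap of positive predictions at prediction time. A sketch of one way to render that with matplotlib, assuming a [H, W, 2] softmax output with the positive class in channel 1; the function name and output filename are assumptions:

    import matplotlib.pyplot as plt

    def plot_positive_heatmap(pred, filename="heatmap.png"):
        plt.imshow(pred[..., 1], cmap="viridis", vmin=0.0, vmax=1.0)  # P(positive) per pixel
        plt.colorbar(label="P(positive)")
        plt.savefig(filename)
        plt.close()
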
0734201107
dlr: tf graph changes 2023-03-03 22:44:49 +00:00
e9dcbe3863
dlr: fixup 2023-03-03 22:09:05 +00:00
5c6789bf40
meaniou: implement one-hot version
it expects sparse, but our output is one-hot.
2023-03-03 22:04:21 +00:00
3d051a8874
dlr: HACK: argmax to convert [64,128,128, 2] → [64,128,128] 2023-03-03 21:41:26 +00:00
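
The two commits above deal with MeanIoU expecting sparse labels while the model emits one-hot [64,128,128,2] tensors. A sketch of the argmax wrapper as a tf.keras.metrics.MeanIoU subclass; the class name is an assumption:

    import tensorflow as tf

    class OneHotMeanIoU(tf.keras.metrics.MeanIoU):
        # MeanIoU wants sparse integer labels; argmax the one-hot channel axis first
        def update_state(self, y_true, y_pred, sample_weight=None):
            y_true = tf.argmax(y_true, axis=-1)   # [64,128,128,2] -> [64,128,128]
            y_pred = tf.argmax(y_pred, axis=-1)
            return super().update_state(y_true, y_pred, sample_weight)

    metric = OneHotMeanIoU(num_classes=2)

Recent TensorFlow releases also ship a built-in tf.keras.metrics.OneHotMeanIoU covering the same case.
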
94a32e7144
dlr: fix metrics 2023-03-03 20:37:22 +00:00
c7f96ab6ab
dlr: typo 2023-03-03 20:23:03 +00:00
06f956dc07
dlr: default LOSS, EPOCHS, and PREDICT_COUNT to better values
Ref recent experiments
2023-03-03 20:17:08 +00:00
b435cc54dd
dlr: add sensitivity (aka recall) and specificity metrics 2023-03-03 20:00:05 +00:00
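
The commit above adds sensitivity (recall) and specificity. A sketch of both, assuming one-hot tensors with the positive (water) class in channel 1 and a 0.5 threshold; the function names are assumptions:

    import tensorflow as tf

    def sensitivity(y_true, y_pred, epsilon=1e-7):
        truth = tf.cast(y_true[..., 1], tf.float32)
        guess = tf.cast(y_pred[..., 1] > 0.5, tf.float32)
        tp = tf.reduce_sum(truth * guess)
        fn = tf.reduce_sum(truth * (1.0 - guess))
        return tp / (tp + fn + epsilon)      # TP / (TP + FN)

    def specificity(y_true, y_pred, epsilon=1e-7):
        truth = tf.cast(y_true[..., 1], tf.float32)
        guess = tf.cast(y_pred[..., 1] > 0.5, tf.float32)
        tn = tf.reduce_sum((1.0 - truth) * (1.0 - guess))
        fp = tf.reduce_sum((1.0 - truth) * guess)
        return tn / (tn + fp + epsilon)      # TN / (TN + FP)
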
d464c9f57d
dlr: add dice loss as metric
more metrics to go tho
2023-03-03 19:34:55 +00:00
f70083bea4
dlr eo: set custom_objects when loading model 2023-03-01 17:19:10 +00:00
4fd9feba4f
dlr eo: tidyup 2023-03-01 16:47:36 +00:00
69b5ae8838
dlr eo: this should fix it 2023-02-23 17:24:30 +00:00
9f1cee2927
dlr eo: cheese it by upsampling and then downsampling again 2023-02-23 16:47:00 +00:00
7bcf13f8d8
dlr: typo 2023-01-16 18:02:09 +00:00
0b31c9e700
log start time, end time, and elapsed
just in case...!
2023-01-16 17:30:20 +00:00
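
The commit above logs start, end, and elapsed wall-clock time. A trivial sketch with datetime; the print format is an assumption:

    from datetime import datetime

    start = datetime.now()
    print(f"Start: {start.isoformat()}")

    # ... training / prediction happens here ...

    end = datetime.now()
    print(f"End: {end.isoformat()}  Elapsed: {end - start}")
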
82e01da70b
fixup.... again
oops
2023-01-13 18:47:29 +00:00
7b10f5c5fe
dlr: add learning_rate env var 2023-01-13 18:29:39 +00:00
be77f035c8
dlr: add cross-entropy + dice loss fn option 2023-01-13 17:58:00 +00:00
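
The commit above adds a combined cross-entropy + Dice loss option. A sketch of one common formulation, assuming one-hot targets and softmax outputs; the unweighted sum and the function name are assumptions:

    import tensorflow as tf

    def ce_dice_loss(y_true, y_pred, smooth=1e-6):
        y_true = tf.cast(y_true, tf.float32)
        # Per-pixel categorical cross-entropy, averaged to one value per sample
        ce = tf.reduce_mean(
            tf.keras.losses.categorical_crossentropy(y_true, y_pred), axis=[1, 2])
        # Soft Dice loss over the whole sample
        intersection = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
        totals = tf.reduce_sum(y_true + y_pred, axis=[1, 2, 3])
        dice = 1.0 - (2.0 * intersection + smooth) / (totals + smooth)
        return ce + dice
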
b2a5acaf4e
dlr dsm: clash 2023-01-13 17:26:38 +00:00
3c4d1c5140
dlr: Add support for stripping isolated water pixels
That is, water pixels that have no other water pixels immediately adjacent thereto (diagonals count).
2023-01-13 16:57:26 +00:00
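
The commit above strips water pixels with no adjacent water pixel (diagonals included, i.e. 8-connectivity). A sketch of one way to do that with a 3x3 neighbour-count convolution, assuming a 2D 0/1 water mask; the function name is an assumption:

    import numpy as np
    import tensorflow as tf

    def strip_isolated_water(mask):
        m = tf.constant(mask, dtype=tf.float32)[tf.newaxis, :, :, tf.newaxis]
        kernel = np.ones((3, 3, 1, 1), dtype=np.float32)
        kernel[1, 1, 0, 0] = 0.0                        # don't count the pixel itself
        neighbours = tf.nn.conv2d(m, kernel, strides=1, padding="SAME")
        keep = tf.cast(neighbours > 0, tf.float32)      # >= 1 adjacent water pixel, diagonals count
        return (m * keep)[0, :, :, 0].numpy()
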