45c76ba252  2023-03-03 22:46:02 +00:00
    typo

c909cfd3d1  2023-03-03 22:45:34 +00:00
    fixup

0734201107  2023-03-03 22:44:49 +00:00
    dlr: tf graph changes

750f46dbd2  2023-03-03 22:39:30 +00:00
    debug

5472729f5e  2023-03-03 22:37:36 +00:00
    dlr: fixup argmax & y_true/y_pred

bc734a29c6  2023-03-03 22:20:11 +00:00
    y_true is one-hot, convert to sparse

c7b577ab29  2023-03-03 22:16:48 +00:00
    specificity: convert to plain tf

26cc824ace  2023-03-03 22:10:49 +00:00
    dlr: MeanIoU fixup

e9dcbe3863  2023-03-03 22:09:05 +00:00
    dlr: fixup

5c6789bf40  2023-03-03 22:04:21 +00:00
    meaniou: implement one-hot version
    it expects sparse, but our output is one-hot.
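The idea behind 5c6789bf40 — Keras' MeanIoU expects sparse integer labels while the model emits one-hot output — can be sketched as a plain mean-IoU over sparse labels. This is a minimal pure-Python illustration; the function name and toy data are hypothetical, and the repository's actual implementation uses TensorFlow ops:

```python
def mean_iou(y_true, y_pred, num_classes):
    """Mean IoU over sparse (integer) class labels, averaged over
    the classes that appear in either the truth or the prediction."""
    ious = []
    for c in range(num_classes):
        # Intersection: pixels where both agree on class c
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        # Union: pixels where either assigns class c
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Flattened sparse labels for a tiny 2-class example
score = mean_iou([0, 0, 1, 1], [0, 1, 1, 1], num_classes=2)  # (1/2 + 2/3) / 2
```

A one-hot variant would first collapse the trailing class axis with an argmax before running the same computation.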
6ffda40d48
|
fixup
|
2023-03-03 21:54:45 +00:00 |
|
|
9b13e9ca5b
|
dlr: fixup argmax first
|
2023-03-03 21:51:24 +00:00 |
|
|
7453c607ed
|
argmax for sensitivity & specificity too
|
2023-03-03 21:49:33 +00:00 |
|
|
8470aec996
|
dlr: fixup
|
2023-03-03 21:45:51 +00:00 |
|
|
3d051a8874
|
dlr: HACK: argmax to convert [64,128,128, 2] → [64,128,128]
|
2023-03-03 21:41:26 +00:00 |
|
|
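The HACK in 3d051a8874 — argmax over the trailing class axis to turn one-hot [64,128,128,2] predictions into sparse [64,128,128] labels — amounts to the following. This is a pure-Python sketch on a tiny 1×2×2×2 batch; the real code would use `tf.argmax(..., axis=-1)` on the full tensor:

```python
def onehot_to_sparse(batch):
    """Collapse the trailing one-hot class axis to integer labels,
    i.e. [N,H,W,C] → [N,H,W], via a per-pixel argmax."""
    def argmax(vec):
        return max(range(len(vec)), key=vec.__getitem__)
    return [[[argmax(px) for px in row] for row in img] for img in batch]

one_hot = [[[[0.9, 0.1], [0.2, 0.8]],
            [[0.3, 0.7], [1.0, 0.0]]]]   # shape [1,2,2,2]
sparse = onehot_to_sparse(one_hot)       # shape [1,2,2]
```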
94a32e7144
|
dlr: fix metrics
|
2023-03-03 20:37:22 +00:00 |
|
|
c7f96ab6ab
|
dlr: typo
|
2023-03-03 20:23:03 +00:00 |
|
|
06f956dc07
|
dlr: default LOSS, EPOCHS, and PREDICT_COUNT to better values
Ref recent experiments
|
2023-03-03 20:17:08 +00:00 |
|
|
b435cc54dd
|
dlr: add sensitivity (aka recall) and specificity metrics
|
2023-03-03 20:00:05 +00:00 |
|
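For b435cc54dd, sensitivity and specificity on binary masks reduce to the standard confusion-matrix ratios. A hedged sketch over flattened 0/1 labels — function names are hypothetical, and the repository's versions operate on TensorFlow tensors:

```python
def sensitivity(y_true, y_pred):
    """Sensitivity (aka recall): TP / (TP + FN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    """Specificity: TN / (TN + FP)."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp)

# One true positive, one miss, one true negative, one false alarm
sens = sensitivity([1, 1, 0, 0], [1, 0, 0, 1])
spec = specificity([1, 1, 0, 0], [1, 0, 0, 1])
```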
483ecf11c8  2023-03-03 19:35:20 +00:00
    add specificity metric

d464c9f57d  2023-03-03 19:34:55 +00:00
    dlr: add dice loss as metric
    more metrics to go tho
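The dice metric referenced in d464c9f57d is commonly computed as 2·|A∩B| / (|A| + |B|), with the loss being one minus the coefficient. A minimal sketch on flattened binary masks; the smoothing constant is an assumption (a common trick to avoid division by zero), not necessarily what this repository uses:

```python
def dice_coefficient(y_true, y_pred, smooth=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|) over binary masks;
    dice loss would be 1 - dice_coefficient(...)."""
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    total = sum(y_true) + sum(y_pred)
    return (2 * inter + smooth) / (total + smooth)

# Prediction recovers one of the two foreground pixels → dice ≈ 2/3
dice = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])
```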
f70083bea4  2023-03-01 17:19:10 +00:00
    dlr eo: set custom_objects when loading model

b5f23e76d1  2023-03-01 16:54:15 +00:00
    dlr eo: allow setting DIR_OUTPUT directly

4fd9feba4f  2023-03-01 16:47:36 +00:00
    dlr eo: tidyup

69b5ae8838  2023-02-23 17:24:30 +00:00
    dlr eo: this should fix it

9f1cee2927  2023-02-23 16:47:00 +00:00
    dlr eo: cheese it by upsampling and then downsampling again

96b94ec55b  2023-02-23 16:19:44 +00:00
    upsampling test

747ddfd41b  2023-02-10 13:28:34 +00:00
    weird, XLA_FLAGS cuda data dir wasn't needed before
    libdevice not found at ./libdevice.10.bc
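The "libdevice not found at ./libdevice.10.bc" error in 747ddfd41b is typically worked around by pointing XLA at the CUDA toolkit directory that contains nvvm/libdevice. The path below is an assumption for illustration; substitute wherever the CUDA toolkit is actually installed:

```shell
# Assumed CUDA install location; adjust so that
# $XLA_CUDA_DIR/nvvm/libdevice/libdevice.10.bc exists
export XLA_FLAGS="--xla_gpu_cuda_data_dir=/usr/local/cuda"
```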
e43274cd91
|
dlr eo: add VAL_STEPS_PER_EPOCH
|
2023-02-03 16:41:30 +00:00 |
|
|
8446a842d1
|
typo
|
2023-02-03 16:01:54 +00:00 |
|
|
1a8f10339a
|
LayerConvNeXtGamma: fix for mixed precision mode
|
2023-02-02 16:22:08 +00:00 |
|
|
a630db2c49
|
dlr eo: fixup
|
2023-02-02 16:17:52 +00:00 |
|
|
2bf1872aca
|
dlr eo: add JIT_COMPILE and MIXED_PRECISION
|
2023-02-02 16:14:09 +00:00 |
|
|
71088b8c0b
|
typo
|
2023-02-02 15:48:49 +00:00 |
|
|
f7666865a0
|
dlr eo: add STEPS_PER_EXECUTION
|
2023-02-02 15:47:08 +00:00 |
|
|
f8202851a1
|
dlr eo: add LEARNING_RATE
|
2023-01-27 16:51:13 +00:00 |
|
|
fb898ea72b
|
slurm eo: seriously....?
|
2023-01-26 17:02:33 +00:00 |
|
|
be946091b1
|
slurm eo: DIR_OUTPUT → DIRPATH_OUTPUT
|
2023-01-26 16:52:14 +00:00 |
|
|
c26a937cdd
|
Ignore Kate swap files
|
2023-01-20 20:33:32 +00:00 |
|
|
4703bdbea1
|
SLURM: add job file for encoderonly
It's pretty much bugfixed, but illykin doesn't have enough RAM to support it at the moment :-(
|
2023-01-20 20:32:35 +00:00 |
|
|
818d77c733
|
Make dirpath_rainfallwater consistent with other experiments
|
2023-01-20 20:31:26 +00:00 |
|
|
e72d3991b8
|
switch to a smaller ConvNeXt
|
2023-01-20 19:14:38 +00:00 |
|
|
e1ad16a213
|
debug A
|
2023-01-20 18:58:45 +00:00 |
|
|
65a2e16a4c
|
ds_eo: lower memory usage
|
2023-01-20 18:55:52 +00:00 |
|
|
b5e68fc1a3
|
eo: don't downsample ConvNeXt at beginning
|
2023-01-20 18:49:46 +00:00 |
|
|
d5fdab50ed
|
dlreo: missing import
|
2023-01-20 18:40:35 +00:00 |
|
|
4514086dc6
|
make_encoderonly: kwargs
|
2023-01-20 18:39:35 +00:00 |
|
|
35dbd3f8bc
|
ds eo: scale up rainfall data
It's taken most fo the afternoon to spot this one 🤦
|
2023-01-20 18:37:08 +00:00 |
|
|
5b54ceec48
|
ds eo: debug
|
2023-01-20 18:36:14 +00:00 |
|
|
a3787f0647
|
debug
|
2023-01-20 18:34:56 +00:00 |
|