Commit | Message | Date
57b8eb93fb | fixup | 2022-11-28 19:09:35 +00:00
6640a41bb7 | almost got it....? it's not what I expected....! | 2022-11-28 19:08:50 +00:00
f48473b703 | fixup | 2022-11-28 19:00:11 +00:00
f6feb125e3 | this is some serious debugging. This commit will produce an extremely large volume of output. | 2022-11-28 18:57:41 +00:00
09f81b0746 | train_mono: debug. This commit will generate a large amount of debug output. | 2022-11-28 16:46:17 +00:00
f39e4ade70 | LayerConvNextGamma: fix config serialisation bug. This is unlikely to be the problem, as the bug is in an unused code path. | 2022-11-25 21:16:31 +00:00
3a0356929c | mono: drop the sparse | 2022-11-22 16:20:56 +00:00
527b34942d | convnext_inverse: kernel_size 4→2 | 2022-11-11 19:29:37 +00:00
0662d0854b | model_mono: fix bottleneck | 2022-11-11 19:11:40 +00:00
73acda6d9a | fix debug logging | 2022-11-11 19:08:38 +00:00
9da059d738 | model shape logging | 2022-11-11 19:03:37 +00:00
54ae88b1b4 | in this entire blasted project I have yet to get the rotation of anything correct....! | 2022-11-11 18:58:45 +00:00
a7a475dcd1 | debug 2 | 2022-11-11 18:38:07 +00:00
bf2f6e9b64 | debug logging: it begins again | 2022-11-11 18:31:40 +00:00
9035450213 | mono: instantiate the right model | 2022-11-11 18:28:29 +00:00
3313f77c88 | Add (untested) mono rainfall → water depth model. *sighs* Unfortunately I can't seem to get contrastive learning to work..... | 2022-11-10 22:36:11 +00:00
9384b89165 | model_segmentation: sparse → normal crossentropy, activation functions at end | 2022-11-10 20:53:37 +00:00
b6676e7361 | switch from sparse to normal crossentropy | 2022-11-10 20:50:56 +00:00
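For context on the sparse → normal (categorical) crossentropy switch above: the two Keras losses compute the same quantity and differ only in the label encoding they expect, integer class indices versus one-hot vectors. A minimal illustrative sketch, not this repository's actual code:

```python
import tensorflow as tf

# Sparse variant: labels are integer class indices.
y_true_sparse = tf.constant([1, 2])                  # shape (batch,)
# "Normal" (categorical) variant: labels are one-hot vectors.
y_true_onehot = tf.one_hot(y_true_sparse, depth=3)   # shape (batch, classes)

y_pred = tf.constant([[0.1, 0.8, 0.1],
                      [0.2, 0.2, 0.6]])              # softmax probabilities

loss_sparse = tf.keras.losses.SparseCategoricalCrossentropy()(y_true_sparse, y_pred)
loss_onehot = tf.keras.losses.CategoricalCrossentropy()(y_true_onehot, y_pred)
print(float(loss_sparse), float(loss_onehot))        # same value either way
```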
44ad51f483 | CallbackNBatchCsv: bugfix .sort() → sorted() | 2022-11-04 16:40:21 +00:00
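The .sort() → sorted() bugfix above is the classic Python pitfall: list.sort() sorts in place and returns None, whereas sorted() returns a new sorted list. A tiny sketch (the field names here are made up, not CallbackNBatchCsv's):

```python
fieldnames = ["loss", "batch", "accuracy"]

# Buggy: list.sort() sorts in place and returns None.
columns = fieldnames.sort()      # columns is None

# Fixed: sorted() returns a new, sorted list and leaves the original alone.
columns = sorted(fieldnames)     # ['accuracy', 'batch', 'loss']
```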
1375201c5f | CallbackNBatchCsv: open_handle mode | 2022-11-03 18:29:00 +00:00
f2ae74ce7b | how could I be so stupid..... round 2 | 2022-11-02 17:38:26 +00:00
5f8d6dc6ea | Add metrics every 64 batches. This is important, because with large batches it can be difficult to tell what's happening inside each epoch. | 2022-10-31 19:26:10 +00:00
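The CallbackNBatchCsv commits suggest a Keras callback that logs metrics to CSV every N batches instead of once per epoch. A rough sketch of that idea, with an assumed class name and constructor arguments (the real implementation may differ):

```python
import csv
import tensorflow as tf

class NBatchCsvLogger(tf.keras.callbacks.Callback):
    """Write training metrics to a CSV file every `n_batches` batches."""

    def __init__(self, filepath, n_batches=64):
        super().__init__()
        self.filepath = filepath
        self.n_batches = n_batches
        self.handle = None
        self.writer = None

    def on_train_batch_end(self, batch, logs=None):
        logs = logs or {}
        if batch % self.n_batches != 0:
            return
        if self.writer is None:
            # Lazily open the file and write the header on first use.
            self.handle = open(self.filepath, "w", newline="")
            self.writer = csv.DictWriter(
                self.handle, fieldnames=["batch"] + sorted(logs.keys()))
            self.writer.writeheader()
        self.writer.writerow({"batch": batch, **logs})
        self.handle.flush()

    def on_train_end(self, logs=None):
        if self.handle is not None:
            self.handle.close()
```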
cf872ef739 | how could I be so *stupid*...... | 2022-10-31 18:40:58 +00:00
da32d75778 | make_callbacks: display steps, not samples | 2022-10-31 18:36:28 +00:00
172cf9d8ce | tweak | 2022-10-31 18:19:43 +00:00
dbe35ee943 | loss: comment l2 norm | 2022-10-31 18:09:03 +00:00
5e60319024 | fixup | 2022-10-31 17:56:49 +00:00
b986b069e2 | debug party time | 2022-10-31 17:50:29 +00:00
458faa96d2 | loss: fixup | 2022-10-31 17:18:21 +00:00
55dc05e8ce | contrastive: comment weights that aren't needed | 2022-10-31 16:26:48 +00:00
1b489518d0 | segmenter: add LayerStack2Image to custom_objects | 2022-10-26 17:05:50 +01:00
48ae8a5c20 | LossContrastive: normalise features as per the paper | 2022-10-26 16:52:56 +01:00
843cc8dc7b | contrastive: rewrite the loss function. The CLIP paper *does* kinda make sense I think | 2022-10-26 16:45:45 +01:00
fad1399c2d | convnext: whitespace | 2022-10-26 16:45:20 +01:00
1d872cb962 | contrastive: fix initial temperature value. It should be 1/0.07, but we had it set to 0.07...... | 2022-10-26 16:45:01 +01:00
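The contrastive commits above track the CLIP recipe: L2-normalise both sets of embeddings, scale the pairwise similarities by a learnable temperature initialised to 1/0.07, and take the symmetric cross-entropy over the matching pairs on the diagonal. A hedged sketch of that recipe, not the repository's LossContrastive class:

```python
import tensorflow as tf

class ClipStyleContrastiveLoss(tf.keras.layers.Layer):
    """Sketch of a CLIP-style symmetric contrastive loss."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Learnable temperature, initialised to 1/0.07 as in the CLIP paper
        # (not 0.07, which was the bug fixed above).
        self.logit_scale = tf.Variable(1.0 / 0.07, trainable=True, dtype=tf.float32)

    def call(self, feats_a, feats_b):
        # Normalise features as per the paper.
        feats_a = tf.math.l2_normalize(feats_a, axis=-1)
        feats_b = tf.math.l2_normalize(feats_b, axis=-1)

        # Pairwise cosine similarities, scaled by the temperature.
        logits = tf.matmul(feats_a, feats_b, transpose_b=True) * self.logit_scale

        # Matching pairs sit on the diagonal.
        labels = tf.range(tf.shape(logits)[0])
        loss_a = tf.keras.losses.sparse_categorical_crossentropy(
            labels, logits, from_logits=True)
        loss_b = tf.keras.losses.sparse_categorical_crossentropy(
            labels, tf.transpose(logits), from_logits=True)
        return tf.reduce_mean((loss_a + loss_b) / 2.0)
```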
f994d449f1 | Layer2Image: fix | 2022-10-25 21:32:17 +01:00
6a29105f56 | model_segmentation: stack not reshape | 2022-10-25 21:25:15 +01:00
98417a3e06 | prepare for NCE loss.....but Tensorflow's implementation looks to be for supervised models :-( | 2022-10-25 21:15:05 +01:00
bb0679a509 | model_segmentation: don't softmax twice | 2022-10-25 21:11:48 +01:00
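The "don't softmax twice" fix above presumably refers to the usual Keras pitfall: applying softmax in the model's output layer and then softmaxing again, or telling the loss the outputs are logits when they are already probabilities. Training still runs, but the gradients are badly scaled. An illustrative sketch of the two consistent options:

```python
import tensorflow as tf

num_classes = 3
inputs = tf.keras.Input(shape=(16,))

# Option A: output raw logits and let the loss apply softmax internally.
logits = tf.keras.layers.Dense(num_classes)(inputs)
model = tf.keras.Model(inputs, logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    # Accuracy belongs in metrics=, not loss= (see 8195318a42 below).
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

# Option B: put a softmax activation on the output layer and use
# from_logits=False. Doing both at once is the "softmax twice" bug.
```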
f2e2ca1484 | model_contrastive: make water encoder significantly shallower | 2022-10-24 20:52:31 +01:00
8195318a42 | SparseCategoricalAccuracy: losses → metrics | 2022-10-21 16:51:20 +01:00
c98d8d05dd | segmentation: use the right accuracy | 2022-10-21 16:17:05 +01:00
3f7db6fa78 | fix embedding confusion | 2022-10-21 15:15:59 +01:00
b3ea189d37 | segmentation: softmax the output | 2022-10-13 21:02:57 +01:00
f121bfb981 | fixup summaryfile | 2022-10-13 17:54:42 +01:00
5c35c0cee4 | model_segmentation: document; remove unused args | 2022-10-13 17:50:16 +01:00
f12e6ab905 | No need for a CLI arg for feature_dim_in - metadata should contain this | 2022-10-13 17:37:16 +01:00
6423bf6702 | LayerConvNeXtGamma: avoid adding an EagerTensor to config. Very weird how this is a problem when it wasn't before.. | 2022-10-12 17:12:07 +01:00
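The EagerTensor-in-config issue above reflects a general Keras constraint: get_config() must return plain, JSON-serialisable Python values, so anything that is a tensor has to be converted (or left out) before it goes into the config dictionary. A toy sketch of the pattern; the class and field names are illustrative, not LayerConvNeXtGamma's actual ones:

```python
import tensorflow as tf

class LayerScaleExample(tf.keras.layers.Layer):
    """Toy layer-scale ("gamma") layer showing safe config serialisation."""

    def __init__(self, init_value=1e-6, **kwargs):
        super().__init__(**kwargs)
        # Keep the plain Python float around for get_config().
        self.init_value = float(init_value)

    def build(self, input_shape):
        self.gamma = self.add_weight(
            name="gamma",
            shape=(input_shape[-1],),
            initializer=tf.keras.initializers.Constant(self.init_value),
            trainable=True,
        )

    def call(self, inputs):
        return inputs * self.gamma

    def get_config(self):
        config = super().get_config()
        # Store the plain float, never a tf.Tensor/EagerTensor:
        # tensors are not JSON-serialisable when the model is saved.
        config.update({"init_value": self.init_value})
        return config
```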
32f5200d3b | pass model_arch properly | 2022-10-12 16:50:06 +01:00
c45b90764e | segmentation: adds xxtiny, but unsure if it's small enough | 2022-10-11 19:22:37 +01:00