Commit graph

609 commits

Author SHA1 Message Date
a762664063
slurm-process: -n28 for uniq call 2022-11-04 17:23:20 +00:00
0166b4d09e
slurm-process: change log file names 2022-11-04 17:11:10 +00:00
0353072d15
allow pretrain to run on gpu
we've slashed the size of the 2nd encoder, so it should fit now?
2022-11-04 17:02:07 +00:00
44ad51f483
CallbackNBatchCsv: bugfix .sort() → sorted() 2022-11-04 16:40:21 +00:00
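The `.sort()` → `sorted()` bugfix above is a classic Python pitfall: `list.sort()` sorts in place and returns `None`, so code that uses its return value silently gets `None` instead of a sorted list. A minimal illustration:

```python
# list.sort() mutates the list in place and returns None
items = [3, 1, 2]
result = items.sort()
assert result is None      # easy to miss: the return value is None
assert items == [1, 2, 3]  # the list itself was sorted in place

# sorted() returns a new sorted list and leaves its input untouched,
# so it is safe to use inline (e.g. when building CSV field names)
original = [3, 1, 2]
ordered = sorted(original)
assert ordered == [1, 2, 3]
assert original == [3, 1, 2]
```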
4dddcfcb42
pretrain_predict: missing \n 2022-11-04 16:01:28 +00:00
1375201c5f
CallbackNBatchCsv: open_handle mode 2022-11-03 18:29:00 +00:00
3206d6b7e7
slurm: rename segmenter job name 2022-11-03 17:12:27 +00:00
f2ae74ce7b
how could I be so stupid..... round 2 2022-11-02 17:38:26 +00:00
441ad92b12
slurm: fixup 2022-11-01 19:57:15 +00:00
bc0e5f05a8
slurm: fixup 2022-11-01 19:55:04 +00:00
784b8ed35c
recordify: catch NaN --count-file 2022-11-01 19:53:21 +00:00
c17a4ca05a
slurm: fix sanity logic 2022-11-01 19:38:04 +00:00
79b231198f
slurm-process: check input files are readable 2022-11-01 19:03:37 +00:00
a69fa9f0f3
slurm: rename 2022-11-01 18:59:55 +00:00
f8341e7d89
slurm: add .log 2022-11-01 18:59:15 +00:00
fecc63b6a2
wrangler: write high-level job file 2022-11-01 18:56:27 +00:00
91152ebb1c
wrangler:recordify update cli help
we only output .jsonl.gz to a DIRECTORY, so update cli help to reflect this
2022-11-01 18:29:47 +00:00
5f8d6dc6ea
Add metrics every 64 batches
this is important, because with large batches it can be difficult to tell what's happening inside each epoch.
2022-10-31 19:26:10 +00:00
cf872ef739
how could I be so *stupid*...... 2022-10-31 18:40:58 +00:00
da32d75778
make_callbacks: display steps, not samples 2022-10-31 18:36:28 +00:00
dfef7db421
moar debugging 2022-10-31 18:26:34 +00:00
172cf9d8ce
tweak 2022-10-31 18:19:43 +00:00
dbe35ee943
loss: comment l2 norm 2022-10-31 18:09:03 +00:00
5e60319024
fixup 2022-10-31 17:56:49 +00:00
b986b069e2
debug party time 2022-10-31 17:50:29 +00:00
458faa96d2
loss: fixup 2022-10-31 17:18:21 +00:00
55dc05e8ce
contrastive: comment weights that aren't needed 2022-10-31 16:26:48 +00:00
33391eaf16
train_predict/jsonl: don't argmax
I'm interested in the raw values
2022-10-26 17:21:19 +01:00
74f2cdb900
train_predict: .list() → .tolist() 2022-10-26 17:12:36 +01:00
4f9d543695
train_predict: don't pass model_code
it's redundant
2022-10-26 17:11:36 +01:00
1b489518d0
segmenter: add LayerStack2Image to custom_objects 2022-10-26 17:05:50 +01:00
48ae8a5c20
LossContrastive: normalise features as per the paper 2022-10-26 16:52:56 +01:00
843cc8dc7b
contrastive: rewrite the loss function.
The CLIP paper *does* kinda make sense I think
2022-10-26 16:45:45 +01:00
fad1399c2d
convnext: whitespace 2022-10-26 16:45:20 +01:00
1d872cb962
contrastive: fix initial temperature value
It should be 1/0.07, but we had it set to 0.07......
2022-10-26 16:45:01 +01:00
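The temperature fix above matters because the logits are *divided* by the temperature: CLIP initialises the logit scale to 1/0.07 ≈ 14.3, so setting it to 0.07 shrinks the logits by a factor of ~200 instead of sharpening them. A small numeric sketch of the distinction:

```python
import numpy as np

TEMPERATURE = 0.07                 # CLIP's initial temperature
logit_scale = 1.0 / TEMPERATURE    # what the scale should be: ~14.29

# the bug: multiplying by 0.07 instead of 14.29 flattens the logits
cosine_sims = np.array([0.9, 0.1, -0.2])
correct = cosine_sims * logit_scale   # well-separated logits
buggy = cosine_sims * TEMPERATURE     # nearly uniform after softmax

# CLIP actually learns the scale in log space for stability
log_logit_scale = np.log(1.0 / TEMPERATURE)
```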
f994d449f1
Layer2Image: fix 2022-10-25 21:32:17 +01:00
6a29105f56
model_segmentation: stack not reshape 2022-10-25 21:25:15 +01:00
98417a3e06
prepare for NCE loss
.....but Tensorflow's implementation looks to be for supervised models :-(
2022-10-25 21:15:05 +01:00
bb0679a509
model_segmentation: don't softmax twice 2022-10-25 21:11:48 +01:00
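"Don't softmax twice" refers to a common pitfall: applying softmax in the model head and then again downstream (e.g. a loss configured for probabilities fed logits, or vice versa). Applying it a second time squashes the already-normalised probabilities towards uniform. A NumPy demonstration of the effect (illustrative, not the repository's code):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([[4.0, 1.0, 0.0]])
once = softmax(logits)    # proper class probabilities
twice = softmax(once)     # the bug: confidence collapses towards uniform
```

The argmax survives, so accuracy can look fine while the loss behaves strangely, which makes this bug easy to miss.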
f2e2ca1484
model_contrastive: make water encoder significantly shallower 2022-10-24 20:52:31 +01:00
a6b07a49cb
count water/nowater pixels in Jupyter Notebook 2022-10-24 18:05:34 +01:00
a8b101bdae
dataset_predict: add shape_water_desired 2022-10-24 18:05:13 +01:00
587c1dfafa
train_predict: revamp jsonl handling 2022-10-21 16:53:08 +01:00
8195318a42
SparseCategoricalAccuracy: losses → metrics 2022-10-21 16:51:20 +01:00
612735aaae
rename shuffle arg 2022-10-21 16:35:45 +01:00
c98d8d05dd
segmentation: use the right accuracy 2022-10-21 16:17:05 +01:00
bb0258f5cd
flip squeeze operator ordering 2022-10-21 15:38:57 +01:00
af26964c6a
batched_iterator: reset i_item after every time 2022-10-21 15:35:43 +01:00
c5b1501dba
train-predict fixup 2022-10-21 15:27:39 +01:00
42aea7a0cc
plt.close() fixup 2022-10-21 15:23:54 +01:00