@jalola
Last active September 23, 2018 13:06
Revisions

  1. jalola revised this gist Sep 23, 2018. 1 changed file with 0 additions and 1 deletion.

     1 change: 0 additions & 1 deletion — fastai_onnx_wrapup_model.py

     @@ -2,7 +2,6 @@ class ImageScale(nn.Module):
         def __init__(self):
             super().__init__()
             self.denorminator = torch.full((3, sz, sz), 255.0, device=torch.device("cuda"))

         def forward(self, x): return torch.div(x, self.denorminator).unsqueeze(0)

     # We need to:
  2. jalola created this gist Sep 23, 2018.

     12 changes: 12 additions & 0 deletions — fastai_onnx_wrapup_model.py

     @@ -0,0 +1,12 @@
     class ImageScale(nn.Module):
         def __init__(self):
             super().__init__()
             self.denorminator = torch.full((3, sz, sz), 255.0, device=torch.device("cuda"))

         def forward(self, x): return torch.div(x, self.denorminator).unsqueeze(0)

     # We need to:
     # - Add ImageScale (divide by 255.0) at the front
     # - Replace the LogSoftmax layer with Softmax at the end to get probabilities instead of log-probabilities
     final_model = [ImageScale()] + (list(learn.model.children())[:-1] + [nn.Softmax()])
     final_model = nn.Sequential(*final_model)
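The gist assumes a fastai notebook context where `learn` (the trained Learner), `sz` (the input image size), and a CUDA device already exist. A minimal CPU-only sketch of the same wrapping technique, with a hypothetical tiny backbone standing in for `list(learn.model.children())[:-1]`, might look like this:

```python
import torch
import torch.nn as nn

sz = 8  # assumed image size; in the gist, fastai's data pipeline defines sz

class ImageScale(nn.Module):
    """Divide raw 0-255 pixel values by 255 so the exported graph
    accepts unnormalized input, then add the batch dimension."""
    def __init__(self):
        super().__init__()
        self.denominator = torch.full((3, sz, sz), 255.0)  # CPU here; gist uses CUDA

    def forward(self, x):
        # unsqueeze(0) turns a single (3, sz, sz) image into a (1, 3, sz, sz) batch
        return torch.div(x, self.denominator).unsqueeze(0)

# Hypothetical stand-in for the fastai head with its final LogSoftmax removed
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * sz * sz, 5))

# Same assembly as the gist: scaling in front, Softmax (dim=1) at the end
final_model = nn.Sequential(ImageScale(), backbone, nn.Softmax(dim=1))
final_model.eval()

with torch.no_grad():
    probs = final_model(torch.rand(3, sz, sz) * 255.0)

print(tuple(probs.shape))  # (1, 5)
print(float(probs.sum()))  # ~1.0, since Softmax normalizes over the class dim
```

With the wrapped model in eval mode, it can then be traced into an ONNX graph via `torch.onnx.export(final_model, dummy_image, "model.onnx")`. Note that passing `dim=1` to `nn.Softmax` explicitly avoids the implicit-dimension deprecation warning that the gist's bare `nn.Softmax()` call triggers.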