Refactored Eth_Mnist #273

Closed
coopersigrist wants to merge 2 commits into BindsNET:master from coopersigrist:master

Conversation

@coopersigrist (Collaborator)

Set default values of None for the torchvision wrapper encoders too.

-image_encoder: Optional[Encoder],
-label_encoder: Optional[Encoder],
+image_encoder: Optional[Encoder] = None,
+label_encoder: Optional[Encoder] = None,
Collaborator

I don't think these can default to None, as the order of the arguments matters in this case. The point is to split the requirements: first we take care of the BindsNET requirements, image_encoder and label_encoder, then we take care of the dataset-specific requirements, *args and **kwargs. I think forcing these (even if somebody passes None explicitly) keeps the consistency. @djsaunde, thoughts?
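
A rough sketch of the split being described, with the body elided (names follow the diff above; this is an illustration, not the actual wrapper code):

def __init__(
        self,
        image_encoder: Optional[Encoder],  # BindsNET requirement, always explicit
        label_encoder: Optional[Encoder],  # BindsNET requirement, always explicit
        *args,                             # dataset-specific, forwarded to torchvision
        **kwargs                           # e.g. root, train, download, transform
    ):
    ...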

Collaborator

Yeah, that makes sense. @coopersigrist, can you remove this?

Collaborator

It does make sense; the requirements need to be split. But the encoders' default needs to be None: if a specific dataset requires a different encoder, it will be specified later via *args and **kwargs.

Collaborator Author

Is there a way to separate them while keeping a default value? I agree that this change isn't necessary at all, so if the preferred style is no default values, I think we should definitely change it back.

@djsaunde (Collaborator), Jun 6, 2019

@Hananel-Hazan The problem is not that the default Encoder should be None (it should be, and still is!), it's that if no arguments are specified for image_encoder and label_encoder, then the TorchvisionDatasetWrapper constructor won't know which argument to pass for the required root argument of the dataset class it wraps.
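
For example, with positional encoders that default to None, a hypothetical call like this goes wrong:

# "data/mnist" binds to image_encoder rather than reaching the wrapped
# dataset's required root argument, so the wrapped constructor fails.
dataset = MNIST("data/mnist", download=True)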

Collaborator

@k-chaney What about the following?

def __init__(
            self,
            *args,
            image_encoder: Optional[Encoder] = None,
            label_encoder: Optional[Encoder] = None,
            **kwargs
        ):
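
With that signature the encoders become keyword-only, and positional arguments flow through *args to the wrapped dataset unambiguously. A hypothetical call:

dataset = MNIST(
    "data/mnist",                            # passed through *args as root
    download=True,                           # passed through **kwargs
    image_encoder=PoissonEncoder(time=250),  # keyword-only BindsNET argument
)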

Collaborator

Something important to remember is that this is a wrapper only for the datasets provided by torchvision; there won't be any other formats, and one shouldn't arbitrarily use this wrapper on other datasets.

Second, if a default is provided for image_encoder and label_encoder without moving them into the keyword-only section of the function signature, you'll see the following. In this toy example I mirror the proposed argument structure:

In [1]: def foo(a=None, *args, **kwargs):
   ...:     print(a)
   ...:     print(args)
   ...:     print(kwargs)
   ...:     

Most of the time, people would provide an argument for a, and you would see the following (this is the ideal case):

In [2]: foo('bindsnet_arg', 'torch_arg1', 'torch_arg2', torch='arg3')
bindsnet_arg
('torch_arg1', 'torch_arg2')
{'torch': 'arg3'}

But some people would skip providing an argument for a, which would result in the following:

In [3]: foo('torch_arg1', 'torch_arg2', torch='arg3')
torch_arg1
('torch_arg2',)
{'torch': 'arg3'}

Even further, some people would provide a as a keyword argument later on:

In [4]: foo('torch_arg', 'torch_arg', torch='arg', a='bindsnet_arg')
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-11-a49618c19c9b> in <module>()
----> 1 foo('torch_arg', 'torch_arg', torch='arg', a='bindsnet_arg')

TypeError: foo() got multiple values for keyword argument 'a'
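
For comparison, a sketch of the keyword-only variant proposed above: a can then only be bound by name, so none of the three cases above is ambiguous:

In [5]: def bar(*args, a=None, **kwargs):
   ...:     print(a)
   ...:     print(args)
   ...:     print(kwargs)
   ...:     

In [6]: bar('torch_arg1', 'torch_arg2', torch='arg3', a='bindsnet_arg')
bindsnet_arg
('torch_arg1', 'torch_arg2')
{'torch': 'arg3'}

In [7]: bar('torch_arg1', 'torch_arg2', torch='arg3')
None
('torch_arg1', 'torch_arg2')
{'torch': 'arg3'}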

sample = dataset[i]
images.append(sample["image"])
labels.append(sample["label"])

Collaborator

I'm not sure how I feel about this refactor. It would be difficult for somebody to take this and apply it to a larger dataset, because the data is extracted ahead of time. I'll leave the final call to somebody else, though.

Collaborator

Yeah, this doesn't make sense. @coopersigrist You should pass a PoissonEncoder to the MNIST constructor, and then just iterate over it during training.
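
A minimal sketch of that pattern, loosely following the eth_mnist example (the exact constructor arguments and sample keys here are assumptions):

from torchvision import transforms

from bindsnet.datasets import MNIST
from bindsnet.encoding import PoissonEncoder

# Hand the encoder to the dataset once; samples are then encoded lazily
# as they are iterated, instead of being extracted into lists up front.
dataset = MNIST(
    PoissonEncoder(time=250, dt=1.0),  # image_encoder
    None,                              # label_encoder
    root="data/mnist",
    download=True,
    transform=transforms.ToTensor(),
)

for sample in dataset:
    spikes = sample["encoded_image"]  # assumed key; shape [time, 1, 28, 28]
    label = sample["label"]
    # ... run the network on `spikes` for `time` timesteps ...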

Collaborator

Agreed

Collaborator Author

Agreed! I'm gonna start redoing this now. I'll close this request and make the changes.
