
✅ Make embedding layer output Float32 #274

Closed · wants to merge 4 commits
Conversation

EssamWisam
Collaborator

In response to issue #273, this casts the embedding layer's output to Float32 and adds a small test for it. @ablaom is this how you want this implemented? Alternatively, I can annotate the return type so that it casts implicitly, if you prefer.
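For context, a minimal sketch of the approach this PR describes (hypothetical names, not the package's actual code): the layer's Float32 weights are applied to the input batch and the result is explicitly cast to Float32.

```julia
# Sketch only: explicit output cast, as in this PR.
embed(weights::AbstractMatrix, x::AbstractMatrix) = Float32.(weights * x)

W = rand(Float32, 4, 3)          # Float32 weights, e.g. from glorot_uniform
x = [1.0 2.0; 2.0 1.0; 1.0 1.0]  # Float64 ordinal codes, 3 features x 2 obs
eltype(embed(W, x))              # Float32, regardless of the input eltype
```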

@ablaom
Collaborator

ablaom commented Sep 15, 2024

Thanks for this. Looking at this again, I'm a little confused as to why it's even necessary. According to this line, the embedding layer weights are initialised using glorot_uniform, which ought (according to the docstring) to initialize with Float32.

Okay, I think the problem is that the integers you provide as inputs to the embedding layers must be Float64, and we're getting Float32 * Float64 = Float64. So, perhaps the more correct solution is to ensure the integer levels are Float32, and not Float64, which would require fewer conversions than the current PR. Does this sound right @EssamWisam?
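To illustrate the promotion being described (a standalone sketch, not the package's code): Float32 weights times Float64 inputs promote to Float64, so converting the inputs once, at encoding time, removes the need for any downstream cast.

```julia
W = rand(Float32, 4, 3)
x64 = [1.0, 2.0, 1.0]       # levels encoded as Float64
x32 = Float32.(x64)         # proposed fix: encode as Float32 instead

eltype(W * x64)  # Float64 -- the promotion described above
eltype(W * x32)  # Float32 -- no cast needed downstream
```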

@EssamWisam
Collaborator Author

Well, for the code in this PR:

levels_per_feat: Int64
newdims: Int64
numfeats: Int64
Type of x: Float64

Thus, it must be that in the other PR the input batch x is Float64, unless I am misunderstanding or missing something.

@ablaom
Collaborator

ablaom commented Sep 16, 2024

In your embedding, you first convert a level, such as "male" or "female", to a floating-point integer, such as 1.0, right? And then this is input to the embedding layer itself, no?

My question is, do you send 1.0 or do you send Float32(1.0) to the embedding layer? Or am I misunderstanding something?

@EssamWisam
Collaborator Author

Oh, I misunderstood what you were referring to as integers; you are talking about the ordinal encoder. It's quite easy to make them Float32 upon generation, but would you prefer that I do this here, or only in the other PR, to guarantee it has the intended effect?
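A minimal sketch of what generating Float32 codes could look like (hypothetical helper, not the package's actual encoder): the levels are mapped straight to Float32 at encoding time.

```julia
# Sketch only: ordinal codes produced as Float32 from the start.
function ordinal_encode(levels::AbstractVector)
    mapping = Dict(l => Float32(i) for (i, l) in enumerate(unique(levels)))
    return [mapping[l] for l in levels]
end

ordinal_encode(["male", "female", "male"])  # Float32[1.0, 2.0, 1.0]
```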

@ablaom
Collaborator

ablaom commented Sep 16, 2024

It's quite easy to make them Float32 upon generation

Yes, this is my preference. Can you do it here, so I don't have to find the relevant code? I think if the current tests pass, that ought to fix my issue in the other PR, but of course I'll check before merging this one.

@EssamWisam
Collaborator Author

Could you check now?

ablaom closed this in #276 on Sep 29, 2024