
Conversation

@DelightRun
Contributor

Since time.clock() is not accurate, we'd better use time.time() instead.

@abergeron
Contributor

Where have you heard that time.clock() is not accurate?

@CallMeK

CallMeK commented Jun 29, 2015

Sorry about my last comment; I mixed up some information.

time.clock() has different implementations on Linux and Windows; a lot of discussion of this can easily be found on Stack Overflow. time.clock() may also give strange results when BLAS is used through something like numpy.dot, as I found on my MacBook Air; this may be due to multithreading. It is safer to report time.time(), i.e. wall time.
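For illustration, a minimal sketch of the divergence (the matrix sizes are arbitrary, and the getattr fallback to time.process_time() is an addition for Python 3.8+, where time.clock() was removed):

```python
import time
import numpy as np

# time.clock() was removed in Python 3.8; time.process_time() measures the
# same quantity there (process-wide CPU time, summed over all threads).
cpu_clock = getattr(time, 'clock', time.process_time)

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

t_cpu, t_wall = cpu_clock(), time.time()
np.dot(a, b)  # a multithreaded BLAS spreads this product across cores
print('CPU time:  %.3f s' % (cpu_clock() - t_cpu))   # can exceed wall time
print('wall time: %.3f s' % (time.time() - t_wall))  # elapsed real time
```

With a multithreaded BLAS, the reported CPU time can be several times the wall time, which is the kind of strange result described above.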

You can check out those discussions online. There are a lot of them.

@CallMeK

CallMeK commented Jun 29, 2015

I just updated my comment. Please ignore the previous email.


@DelightRun
Contributor Author

I checked out this discussion on Stack Overflow; it says time.clock() is not as accurate as time.time() on Unix systems, and that time.clock() can't give the correct result when using a GPU.

However, I think timeit.default_timer would be a good solution, because it selects a default timer in a platform-specific manner.
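For example (the squaring loop is just a stand-in workload, not the tutorials' code):

```python
from timeit import default_timer as timer

# On Python 2, default_timer is time.clock() on Windows and time.time()
# elsewhere; on Python 3.3+ it is time.perf_counter() on every platform.
start = timer()
sum(i * i for i in range(10 ** 7))  # stand-in for the real computation
print('ran for %.2fm' % ((timer() - start) / 60.))
```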

@DelightRun
Contributor Author

I've decided to close this pull request, since using time.time() directly is not a good solution.

@DelightRun DelightRun closed this Jun 29, 2015
@abergeron
Contributor

Just for the record, the discussion on Stack Overflow refers to timing very short snippets of code, which is not what we do here (the usual measured time is in minutes). It is important to consider the context in which the code is used, and since we don't need anything more precise than seconds, it doesn't matter which one we use.

However, since clock() is deprecated as of Python 3.3, that is a good argument for switching to time().
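The change itself is mechanical; a minimal sketch of the pattern (the print format mirrors the tutorials' style, but this is not the actual diff):

```python
import time

start_time = time.time()   # was: start_time = time.clock()
# ... training loop runs here, typically for minutes ...
end_time = time.time()     # was: end_time = time.clock()
print('The code ran for %.2fm' % ((end_time - start_time) / 60.))
```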

@DelightRun
Contributor Author

Thanks for your reply. I've closed this pull request.
Actually, time.clock() gives a completely wrong result when we use the GPU for computation, so that's another reason to use time.time() instead.
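A minimal sketch of why, on Unix, where time.clock() reports process CPU time: the clock barely advances while the process waits on work done elsewhere (time.sleep() stands in here for a GPU kernel):

```python
import time

# time.clock() was removed in Python 3.8; fall back to its modern equivalent.
cpu_clock = getattr(time, 'clock', time.process_time)

t_cpu, t_wall = cpu_clock(), time.time()
time.sleep(2)  # stands in for waiting on off-CPU work, e.g. a GPU kernel
print('CPU time:  %.2f s' % (cpu_clock() - t_cpu))   # ~0 s: the CPU was idle
print('wall time: %.2f s' % (time.time() - t_wall))  # ~2 s: real elapsed time
```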

BTW, could you please have a look at issue #95, about the duplicate-name problem I ran into when trying to save an SdA model?

@nouiz
Member

nouiz commented Jun 29, 2015

As Python 3.3 deprecates time.clock(), I'll reopen this PR and merge it. Thanks for bringing this to our attention.

@nouiz nouiz reopened this Jun 29, 2015
nouiz added a commit that referenced this pull request Jun 29, 2015
replace time.clock() by time.time()
@nouiz nouiz merged commit 21b530c into lisa-lab:master Jun 29, 2015
@DelightRun
Contributor Author

My pleasure

taneishi pushed a commit to taneishi/DBN that referenced this pull request Nov 28, 2019
replace time.clock() by time.time()
taneishi pushed a commit to taneishi/DBN that referenced this pull request Feb 13, 2020
replace time.clock() by time.time()
