I have a Python script that works just as it should, but I need to measure its execution time. I've googled that I should use `timeit`, but I can't seem to get it to work.
My Python script looks like this:
```python
import sys
import getopt
import timeit
import random
import os
import re
import ibm_db
import time
from string import maketrans

myfile = open("results_update.txt", "a")

for r in range(100):
    rannumber = random.randint(0, 100)
    update = "update TABLE set val = %i where MyCount >= '2010' and MyCount < '2012' and number = '250'" % rannumber
    #print rannumber
    conn = ibm_db.pconnect("dsn=myDB", "usrname", "secretPWD")
    query_stmt = ibm_db.prepare(conn, update)
    for r in range(5):
        print "Run %s\n" % r
        ibm_db.execute(query_stmt)

myfile.close()
ibm_db.close(conn)
```
What I need is the time it takes to execute the query, written to the file results_update.txt. The purpose is to test an update statement for my database with different indexes and tuning mechanisms.
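A minimal sketch of the timing-and-logging pattern being asked for, with a trivial placeholder function standing in for the `ibm_db` calls (the placeholder name and file format are assumptions, not the asker's actual code):

```python
import time

def run_update():
    # placeholder for the real ibm_db.execute(query_stmt) call
    sum(range(1000))

with open("results_update.txt", "a") as myfile:
    t0 = time.time()
    run_update()
    elapsed = time.time() - t0
    # append the elapsed time for this run to the results file
    myfile.write("Run took %.6f seconds\n" % elapsed)
```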
```python
import time

t0 = time.time()
code_block
t1 = time.time()

total = t1 - t0
```
This method is not as exact as
timeit (it does not average several runs) but it is straightforward.
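For comparison, the `timeit` equivalent averages over many runs for you (the statement and the 10000-iteration count here are arbitrary choices for illustration):

```python
import timeit

# timeit runs the statement many times and reports the total elapsed
# time, smoothing out one-off fluctuations
total = timeit.timeit("sum(range(100))", number=10000)
print(total / 10000)  # average time per call, in seconds
```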
`time.time()` (on Windows and Linux) and `time.clock()` (on Linux) are not precise enough for fast functions (you get total = 0). In that case, or if you want to average the elapsed time over several runs, you have to call the function multiple times manually (as I think you already do in your example code, and as `timeit` does automatically when you set its `number` argument):
```python
import time

def myfast():
    code

n = 10000
t0 = time.time()
for i in range(n):
    myfast()
t1 = time.time()

total_n = t1 - t0
```
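The manual loop above is exactly what `timeit`'s `number` argument does for you; `timeit.timeit` also accepts a callable directly (the body of `myfast` here is a stand-in, an assumption for the sake of a runnable example):

```python
import timeit

def myfast():
    sum(range(100))  # stand-in for the real work

n = 10000
total_n = timeit.timeit(myfast, number=n)  # calls myfast() n times
print(total_n / n)  # average seconds per call
```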
On Windows, as Corey stated in the comment, `time.clock()` has much higher precision (microsecond instead of second) and is preferred over `time.time()`.
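If you'd rather not pick the right clock per platform yourself, `timeit.default_timer` is the timer `timeit` itself uses (`time.clock()` on Windows and `time.time()` elsewhere in Python 2; `time.perf_counter()` in Python 3):

```python
from timeit import default_timer

t0 = default_timer()
sum(range(100000))  # code to time
t1 = default_timer()

print(t1 - t0)  # elapsed seconds, using the best available clock
```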
If you are profiling your code and can use IPython, it has the magic function `%timeit`; `%%timeit` operates on whole cells.
```
In [1]: %timeit cos(3.14)
10000000 loops, best of 3: 160 ns per loop

In [2]: %%timeit
   ...: cos(3.14)
   ...: x = 2 + 3
   ...:
10000000 loops, best of 3: 196 ns per loop
```