Merged
Changes from 14 commits · 65 commits total
67e41ef
Fixed #367
dimitri-yatsenko Oct 12, 2017
f89bf0e
Merge branch 'master' of https://github.com/datajoint/datajoint-python
dimitri-yatsenko Oct 12, 2017
159cb51
fixed dependencies
dimitri-yatsenko Oct 12, 2017
bb88117
implemented the Union operator
dimitri-yatsenko Oct 13, 2017
9703126
bugfix in populate
dimitri-yatsenko Oct 13, 2017
4fa1abd
fixed a bug in server-side insert with missing attributes
dimitri-yatsenko Oct 13, 2017
d6b9f2a
work on cascading delete
dimitri-yatsenko Oct 19, 2017
c35ed7a
Merge branch 'master' of https://github.com/dimitri-yatsenko/datajoin…
dimitri-yatsenko Oct 19, 2017
7299058
fixed #375 -- added the max_calls argument to populate
dimitri-yatsenko Oct 19, 2017
3baf487
simplified insert from select
dimitri-yatsenko Oct 20, 2017
4969c36
consolidating the use of hashes
dimitri-yatsenko Oct 20, 2017
ce774be
added module `external` for handling external storage
dimitri-yatsenko Oct 20, 2017
ad650b0
minor fix in external, work in progress
dimitri-yatsenko Oct 20, 2017
860d04b
minor
dimitri-yatsenko Oct 20, 2017
2144b0f
changed how external storage is configured
dimitri-yatsenko Oct 20, 2017
cfc1e21
minor bug
dimitri-yatsenko Oct 20, 2017
0195cfc
minor formatting
dimitri-yatsenko Oct 20, 2017
d821bcf
removed the JobManager class -- it was never used
dimitri-yatsenko Oct 20, 2017
183b324
bugfix from previous commit
dimitri-yatsenko Oct 20, 2017
6164eb5
renamed the display_progress argument in populate()
dimitri-yatsenko Oct 20, 2017
96f0d20
Merge branch 'master' of https://github.com/dimitri-yatsenko/datajoin…
dimitri-yatsenko Oct 20, 2017
b37175b
rolled back unintended changes in delete
dimitri-yatsenko Oct 20, 2017
9688e5d
fixed typo JobsTable -> JobTable
dimitri-yatsenko Oct 20, 2017
bcc0048
fixed bugs introduced in recent commits
dimitri-yatsenko Oct 20, 2017
28208c2
Merge branch 'master' of https://github.com/datajoint/datajoint-python
dimitri-yatsenko Oct 20, 2017
04b1829
fixed test_requirements
dimitri-yatsenko Oct 20, 2017
1c8e071
fixed indentation
dimitri-yatsenko Oct 20, 2017
bb11c1b
minor
dimitri-yatsenko Oct 20, 2017
9fc36b4
minor
dimitri-yatsenko Oct 20, 2017
9fca90f
minor code refactor in schema.py
dimitri-yatsenko Oct 22, 2017
7069dad
improved the error message in autopopulate
dimitri-yatsenko Oct 22, 2017
3bc28c5
correction to previous commit
dimitri-yatsenko Oct 22, 2017
a3d1a53
Merge branch 'master' of https://github.com/datajoint/datajoint-python
dimitri-yatsenko Oct 24, 2017
d9cca7b
Merge branch 'master' of https://github.com/dimitri-yatsenko/datajoin…
dimitri-yatsenko Oct 24, 2017
1b37ee9
typo
dimitri-yatsenko Oct 24, 2017
cf3cf0c
made `make` an acceptable name for the populate callback (issue #387)
dimitri-yatsenko Oct 25, 2017
85b6587
small bugfix for rare cases with multiple inheritance
dimitri-yatsenko Oct 27, 2017
f936073
minor fix
dimitri-yatsenko Oct 27, 2017
2b622c1
Merge branch 'master' of https://github.com/dimitri-yatsenko/datajoin…
dimitri-yatsenko Oct 27, 2017
5270d80
minor fixes
dimitri-yatsenko Oct 29, 2017
85502b0
Merge branch 'master' of https://github.com/dimitri-yatsenko/datajoin…
dimitri-yatsenko Oct 30, 2017
bdba20d
undid an unintended change in delete
dimitri-yatsenko Oct 30, 2017
919efba
implemented declaration of external fields
dimitri-yatsenko Oct 30, 2017
e30dfb2
added tests for external storage
dimitri-yatsenko Oct 30, 2017
259950d
minor cleanup
dimitri-yatsenko Oct 30, 2017
3a7f416
minor cleanup
dimitri-yatsenko Oct 30, 2017
3594c67
added external storage tests
dimitri-yatsenko Nov 5, 2017
7eb114c
Merge branch 'master' of https://github.com/datajoint/datajoint-python
dimitri-yatsenko Nov 5, 2017
4d1af79
Completed basic implementation of external storage.
dimitri-yatsenko Nov 5, 2017
4a58a9a
ERD does not show dependencies on external storage
dimitri-yatsenko Nov 5, 2017
1ca0b09
again, the ERD no longer includes references to ~external
dimitri-yatsenko Nov 5, 2017
add950f
fixed #328: the jobs table now records the error stack
dimitri-yatsenko Nov 5, 2017
90c021e
fixes for #328
dimitri-yatsenko Nov 5, 2017
782a9a5
fixed #388 -- a more elegant way to skip duplicates in insert
dimitri-yatsenko Nov 5, 2017
383595d
followup to previous commit
dimitri-yatsenko Nov 5, 2017
794dc47
made insert from query more consistent with insert from variables
dimitri-yatsenko Nov 5, 2017
5ab3381
fixed issue #381 -- better error messages for syntax errors in declar…
dimitri-yatsenko Nov 5, 2017
4b2671e
typo from previous commit
dimitri-yatsenko Nov 6, 2017
9a0b902
set the strict mode at connection time
dimitri-yatsenko Nov 6, 2017
7302571
set sql_mode in connection
dimitri-yatsenko Nov 6, 2017
569881a
updated the sql_mode
dimitri-yatsenko Nov 6, 2017
86c2480
added tests for union and for external storage. Other minor fixes bas…
dimitri-yatsenko Nov 13, 2017
6f7c6bd
improved documentation and error messages for fetch and fetch1. Fixe…
dimitri-yatsenko Nov 14, 2017
8018a9d
added tests for external storage
dimitri-yatsenko Nov 15, 2017
7627569
changed the shape of the computed nodes in the ERD to ellipse to avoid…
dimitri-yatsenko Nov 15, 2017
1 change: 1 addition & 0 deletions .gitignore
@@ -16,3 +16,4 @@ dj_local_conf.json
build/
.coverage
./tests/.coverage
*.log
13 changes: 10 additions & 3 deletions datajoint/autopopulate.py
@@ -3,6 +3,7 @@
import datetime
import random
from tqdm import tqdm
from itertools import count
from pymysql import OperationalError
from .relational_operand import RelationalOperand, AndList, U
from . import DataJointError
@@ -64,7 +65,8 @@ def _job_key(self, key):
"""
return key

def populate(self, *restrictions, suppress_errors=False, reserve_jobs=False, order="original", limit=None, display_progress=False):
def populate(self, *restrictions, suppress_errors=False, reserve_jobs=False,
order="original", limit=None, max_calls=None, display_progress=False):
"""
rel.populate() calls rel._make_tuples(key) for every primary key in self.key_source
for which there is not already a tuple in rel.
@@ -73,8 +75,9 @@ def populate(self, *restrictions, suppress_errors=False, reserve_jobs=False, ord
:param suppress_errors: suppresses error if true
:param reserve_jobs: if true, reserves job to populate in asynchronous fashion
:param order: "original"|"reverse"|"random" - the order of execution
:param limit: if not None, populates at max that many keys
:param display_progress: if True, report progress_bar
:param limit: if not None, checks at most that many keys
:param max_calls: if not None, populates at most that many keys
"""
if self.connection.in_transaction:
raise DataJointError('Populate cannot be called during a transaction.')
@@ -107,8 +110,12 @@ def handler(signum, frame):
elif order == "random":
random.shuffle(keys)

call_count = 0
logger.info('Found %d keys to populate' % len(keys))

for key in (tqdm(keys) if display_progress else keys):
if max_calls is not None and call_count >= max_calls:
break
if not reserve_jobs or jobs.reserve(self.target.table_name, self._job_key(key)):
self.connection.start_transaction()
if key in self.target: # already populated
@@ -117,6 +124,7 @@ def handler(signum, frame):
jobs.complete(self.target.table_name, self._job_key(key))
else:
logger.info('Populating: ' + str(key))
call_count += 1
try:
self._make_tuples(dict(key))
except (KeyboardInterrupt, SystemExit, Exception) as error:
@@ -142,7 +150,6 @@ def handler(signum, frame):
# place back the original signal handler
if reserve_jobs:
signal.signal(signal.SIGTERM, old_handler)

return error_list

def progress(self, *restrictions, display=True):
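The new `max_calls` argument caps how many `_make_tuples` calls a single `populate()` run makes, while `limit` only caps how many pending keys are fetched and checked. A minimal usage sketch, not part of this diff (`FilteredImage` is a hypothetical `dj.Computed` subclass):

```python
# Check at most 100 pending keys, compute at most 10 of them, and show
# a tqdm progress bar; with suppress_errors=True, populate() returns the
# accumulated errors instead of raising.
errors = FilteredImage().populate(
    limit=100,              # fetch at most 100 keys from key_source
    max_calls=10,           # invoke _make_tuples for at most 10 of them
    display_progress=True,  # wrap the key loop in tqdm
    suppress_errors=True)
```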
103 changes: 43 additions & 60 deletions datajoint/base_relation.py
@@ -137,32 +137,24 @@ def insert(self, rows, replace=False, ignore_errors=False, skip_duplicates=False
>>> dict(subject_id=8, species="mouse", date_of_birth="2014-09-02")])
"""

heading = self.heading
if isinstance(rows, RelationalOperand):
# INSERT FROM SELECT - build alternate field-narrowing query (only) when needed
if ignore_extra_fields and not all(name in self.heading for name in rows.heading):
query = 'INSERT{ignore} INTO {table} ({fields}) SELECT {fields} FROM ({select}) as `__alias`'.format(
ignore=" IGNORE" if ignore_errors or skip_duplicates else "",
table=self.full_table_name,
fields='`'+'`,`'.join(name for name in self.heading if name in rows.heading) + '`',
select=rows.make_sql())
else:
query = 'INSERT{ignore} INTO {table} ({fields}) {select}'.format(
ignore=" IGNORE" if ignore_errors or skip_duplicates else "",
table=self.full_table_name,
fields='`'+'`,`'.join(rows.heading.names)+'`',
select=rows.make_sql())
try:
self.connection.query(query)
except pymysql.err.InternalError as err:
if err.args[0] == server_error_codes['unknown column']:
print(query)
# args[1] -> Unknown column 'extra' in 'field list'
raise DataJointError('%s : To ignore extra fields, set ignore_extra_fields=True in insert.' % err.args[1])
else:
raise
return
# insert from select
if not ignore_extra_fields:
try:
raise DataJointError("Attribute %s not found.",
next(name for name in rows.heading if name not in heading))
except StopIteration:
pass
fields='`'+'`,`'.join(name for name in heading if name in rows.heading) + '`'
query = 'INSERT{ignore} INTO {table} ({fields}) {select}'.format(
ignore=" IGNORE" if ignore_errors or skip_duplicates else "",
fields=fields,
table=self.full_table_name,
select=rows.make_sql(select_fields=fields))
self.connection.query(query)
return

heading = self.heading
if heading.attributes is None:
logger.warning('Could not access table {table}'.format(table=self.full_table_name))
return
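With the reworked insert-from-select, the target's heading drives the column list, so a query whose heading is a superset of the target's inserts cleanly with `ignore_extra_fields=True`. A hedged sketch (`Session` and `SessionSummary` are hypothetical relations, not part of this diff):

```python
# Insert directly from a query; skip_duplicates renders as INSERT IGNORE,
# and ignore_extra_fields drops attributes absent from SessionSummary.
SessionSummary().insert(
    Session() & 'session_date > "2017-01-01"',
    ignore_extra_fields=True,
    skip_duplicates=True)
```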
@@ -278,45 +270,33 @@ def delete(self):
Deletes the contents of the table and its dependent tables, recursively.
User is prompted for confirmation if config['safemode'] is set to True.
"""

# fill out the delete list in topological order
graph = self.connection.dependencies
graph.load()
delete_list = collections.OrderedDict()
for table in graph.descendants(self.full_table_name):
if not table.isdigit():
delete_list[table] = FreeRelation(self.connection, table)
else:
parent, edge = next(iter(graph.parents(table).items()))
delete_list[table] = FreeRelation(self.connection, parent).proj(
**{new_name: old_name
for new_name, old_name in zip(edge['referencing_attributes'], edge['referenced_attributes'])
if new_name != old_name})

# construct restrictions for each relation
restrict_by_me = set()
restrictions = collections.defaultdict(list)
# restrict by self
if self.restrictions:
restrict_by_me.add(self.full_table_name)
restrictions[self.full_table_name].append(self.restrictions.simplify()) # copy own restrictions
# restrict by renamed nodes
restrict_by_me.update(table for table in delete_list if table.isdigit()) # restrict by all renamed nodes
# restrict by tables restricted by a non-primary semijoin
for table in delete_list:
restrict_by_me.update(graph.children(table, primary=False)) # restrict by any non-primary dependents

# compile restriction lists
for table, rel in delete_list.items():
for dep in graph.children(table):
if table in restrict_by_me:
restrictions[dep].append(rel) # if restrict by me, then restrict by the entire relation
delete_list = collections.OrderedDict(
(table, None if table.isdigit() else FreeRelation(self.connection, table))
for table in graph.descendants(self.full_table_name))
for rel in delete_list.values():
rel.restrict(False) # initially prohibit all
# apply restrictions
delete_list[self.full_table_name].set(self.restrictions)
for name, rel in delete_list.items():
all_children = graph.children(name)
semi = set(all_children)
if not name.isdigit() and not (name == self.full_table_name and self.restrictions):
semi.difference_update(graph.children(name, primary=True))
for child in semi:
if not child.isdigit():
delete_list[child].allow(rel)
else:
restrictions[dep].extend(restrictions[table]) # or re-apply the same restrictions
# allow aliased
for child, props in graph.children(child).items():
delete_list[child].allow(rel.proj(
**dict(zip(props['referencing_attributes'], props['referenced_attributes']))))
for child in set(all_children).difference(semi):
delete_list[child].allow(rel.restrictions)

# apply restrictions
for name, r in delete_list.items():
if restrictions[name]: # do not restrict by an empty list
r.restrict([r.proj() if isinstance(r, RelationalOperand) else r
for r in restrictions[name]])
# execute
do_delete = False # indicate if there is anything to delete
if config['safemode']: # pragma: no cover
@@ -342,7 +322,10 @@ def delete(self):
if not already_in_transaction:
self.connection.start_transaction()
for r in reversed(list(delete_list.values())):
r.delete_quick()
try:
r.delete_quick()
except Exception as e:
print(e)
if not already_in_transaction:
self.connection.commit_transaction()
print('Done')
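The cascade collects all descendant tables in topological order and deletes bottom-up inside a transaction, prompting first when `safemode` is on. A usage sketch (`Session` and the restriction are illustrative):

```python
import datajoint as dj

dj.config['safemode'] = True  # ask for confirmation before committing
# Cascading delete: removes the matching rows from Session and, bottom-up,
# from every table that depends on them.
(Session() & 'subject_id = 8').delete()
# delete_quick() removes only this table's matching rows: no cascade, no prompt.
(Session() & 'subject_id = 8').delete_quick()
```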
2 changes: 0 additions & 2 deletions datajoint/connection.py
@@ -11,7 +11,6 @@
from . import config
from . import DataJointError
from .dependencies import Dependencies
from .jobs import JobManager
from pymysql import err

logger = logging.getLogger(__name__)
@@ -75,7 +74,6 @@ def __init__(self, host, user, password, init_fun=None):
raise DataJointError('Connection failed.')
self._conn.autocommit(True)
self._in_transaction = False
self.jobs = JobManager(self)
self.schemas = dict()
self.dependencies = Dependencies(self)

116 changes: 116 additions & 0 deletions datajoint/external.py
@@ -0,0 +1,116 @@
import os
import pymysql
from . import config, DataJointError
from .hash import long_hash
from .blob import pack, unpack
from .base_relation import BaseRelation


class ExternalTable(BaseRelation):
"""
The table tracking externally stored objects
"""
def __init__(self, arg, database=None):
if isinstance(arg, ExternalTable):
super().__init__(arg)
# copy constructor
self.database = arg.database
self._connection = arg._connection
self._definition = arg._definition
self._user = arg._user
return
super().__init__()
self.database = database
self._connection = arg
if not self.is_declared:
self.declare()

@property
def definition(self):
return """
# external storage tracking
store :char(8) # the name of external store
hash :char(43) # the hash of stored object
---
count = 1 :int # reference count
size :bigint unsigned # size of object in bytes
timestamp=CURRENT_TIMESTAMP :timestamp # automatic timestamp
"""

@property
def table_name(self):
return '~external'

def put(self, store, obj):
"""
put an object in external store
"""
# serialize object
blob = pack(obj)
hash = long_hash(blob)

# write object
try:
spec = config['external.%s' % store]
except KeyError:
raise DataJointError('external.%s is not configured' % store)

if spec['protocol'] == 'file':
folder = os.path.join(spec['location'], self.database)
full_path = os.path.join(folder, hash)
if not os.path.isfile(full_path):
try:
with open(full_path, 'wb') as f:
f.write(blob)
except FileNotFoundError:
os.makedirs(folder)
with open(full_path, 'wb') as f:
f.write(blob)
else:
raise DataJointError('Unknown external storage %s' % store)

# insert tracking info
query = """INSERT INTO `{db}`.`{table}` (store, hash, size) VALUES ({store}, {hash}, {size})
ON DUPLICATE KEY count=count+1, timestamp=CURRENT_TIMESTAMP""".format(
db=self.database,
table=self.table_name,
store=store,
hash=hash,
size=len(blob))
self.connection.

return hash


def get(self, store, hash):
"""
get an object from external store
"""
try:
spec = config['external.%s' % store]
except KeyError:
raise DataJointError('external.%s is not configured' % store)

if spec['protocol'] == 'file':
full_path = os.path.join(spec['location'], self.database, hash)
try:
with open(full_path, 'rb') as f:
blob = f.read()
except FileNotFoundError:
raise DataJointError('Lost external blob')
else:
raise DataJointError('Unknown external storage %s' % store)

return unpack(blob)


def remove(self, store, hash):
"""
delete an object from external store
"""
# decrement count
query = "UPDATE `{db}`.`{table}` count=count-1 WHERE store={store} and hash={hash}".format(
db=self.database,
table=self.table_name,
store=store,
hash=hash)
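`put()` and `get()` resolve the store from `config['external.<store>']`, and this version implements only the `file` protocol, writing each blob to `<location>/<database>/<hash>`. A hedged configuration sketch (the store name and path are hypothetical):

```python
import datajoint as dj

# Register a file-protocol store named 'raw'; ExternalTable.put('raw', obj)
# will then serialize obj and write it under /data/dj-external/<database>/.
dj.config['external.raw'] = {
    'protocol': 'file',
    'location': '/data/dj-external'}
```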
30 changes: 19 additions & 11 deletions datajoint/hash.py
@@ -1,6 +1,15 @@
import hashlib
import base64

def key_hash(key):
"""
32-character hash used for lookup of primary keys of jobs
"""
hashed = hashlib.md5()
for k, v in sorted(key.items()):
hashed.update(str(v).encode())
return hashed.hexdigest()


def to_ascii(byte_string):
"""
@@ -10,25 +19,24 @@ def to_ascii(byte_string):
return base64.b64encode(byte_string, b'-_').decode()


def long_hash(buffer):
def long_hash(*buffers):
"""
:param buffers: binary buffers (e.g. serialized blobs)
:return: 43-character base64 ASCII rendition of the SHA-256 digest
"""
return to_ascii(hashlib.sha256(buffer).digest())[0:43]
hashed = hashlib.sha256()
for buffer in buffers:
hashed.update(buffer)
return to_ascii(hashed.digest())[0:43]


def short_hash(buffer):
def short_hash(*buffers):
"""
:param buffers: binary buffers (e.g. serialized blobs)
:return: the first 8 characters of the base64 ASCII rendition of the SHA-1 digest
"""
return to_ascii(hashlib.sha1(buffer).digest())[:8]

hashed = hashlib.sha1()
for buffer in buffers:
hashed.update(buffer)
return to_ascii(hashed.digest())[:8]

# def filehash(filename):
# s = hashlib.sha256()
# with open(filename, 'rb') as f:
# for block in iter(lambda: f.read(65536), b''):
# s.update(block)
# return to_ascii(s.digest())
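The reworked helpers accept any number of buffers and feed them into a single digest; `key_hash` produces the 32-character MD5 that the jobs table uses to index primary keys. A quick sketch:

```python
from datajoint.hash import key_hash, long_hash, short_hash

key_hash({'subject_id': 8, 'session': 1})  # 32-char hex MD5 over the sorted key values
long_hash(b'one buffer', b'another')       # 43-char base64 of the SHA-256 over both buffers
short_hash(b'one buffer')                  # first 8 chars of the base64 SHA-1
```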