Commit 489102b

update the vision for the 0.6rc1 release.
1 parent 538aedb commit 489102b

File tree

1 file changed: +8, -9 lines changed


doc/introduction.txt

Lines changed: 8 additions & 9 deletions
@@ -165,11 +165,11 @@ Note: There is no short term plan to support multi-node computation.
 Theano Vision State
 ===================

-Here is the state of that vision as of 24 October 2011 (after Theano release
-0.4.1):
+Here is the state of that vision as of 1 October 2012 (after Theano release
+0.6rc1):

 * We support tensors using the `numpy.ndarray` object and we support many operations on them.
-* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them (more are coming).
+* We support sparse types by using the `scipy.{csc,csr}_matrix` object and support some operations on them.
 * We have started implementing/wrapping more advanced linear algebra operations.
 * We have many graph transformations that cover the 4 categories listed above.
 * We can improve the graph transformation with better storage optimization
@@ -196,16 +196,15 @@ Here is the state of that vision as of 24 October 2011 (after Theano release
 * The profiler used by cvm is less complete than `ProfileMode`.

 * SIMD parallelism on the CPU comes from the compiler.
-* Multi-core parallelism is only supported for gemv and gemm, and only
-  if the external BLAS implementation supports it.
+* Multi-core parallelism is only supported for Conv2d. gemm, gemv and ger
+  are also parallelized if the external BLAS implementation supports it.
 * No multi-node support.
 * Many, but not all NumPy functions/aliases are implemented.
 * http://www.assembla.com/spaces/theano/tickets/781
-* Wrapping an existing Python function in easy, but better documentation of
-  it would make it even easier.
-* We need to find a way to separate the shared variable memory
+* Wrapping an existing Python function is easy and documented.
+* We know how to separate the shared variable memory
   storage location from its object type (tensor, sparse, dtype, broadcast
-  flags).
+  flags), but we need to do it.


 Contact us
