PART 1 — ELEMENT-WISE MULTIPLICATION
What it is
Element-wise: each element is multiplied by the corresponding element.
Arrays must be the same shape (or broadcastable).
Output shape = input shape.
a = [1, 2, 3, 4]
b = [10,20,30,40]
↓ ↓ ↓ ↓
[10, 40, 90, 160]
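The snippets in this note reuse a handful of arrays (a, b, A, B, A_rect, B_rect) without defining them. The definitions below are inferred from the outputs shown throughout; run them once and every snippet that follows works as written.

```python
import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([10, 20, 30, 40])

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

A_rect = np.array([[1, 2, 3],
                   [4, 5, 6]])          # shape (2, 3)
B_rect = np.array([[ 7,  8],
                   [ 9, 10],
                   [11, 12]])           # shape (3, 2)
```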
1. np.multiply() and * operator
# These are identical — * calls np.multiply under the hood
a * b              # → [10, 40, 90, 160]
np.multiply(a, b)  # → [10, 40, 90, 160]

# 2D element-wise
A * B              # → [[ 5, 12],
                   #    [21, 32]]
np.multiply(A, B)  # → same
3. Element-wise with broadcasting — different shapes
# (3,1) × (1,4) → (3,4): all combinations
col = np.array([[1], [2], [3]])     # shape (3, 1)
row = np.array([[10, 20, 30, 40]])  # shape (1, 4)
col * row
# → [[10, 20, 30, 40],
#    [20, 40, 60, 80],
#    [30, 60, 90, 120]]

# Scale each row of A_rect by a different factor
scales = np.array([1, 2])           # shape (2,) → broadcast over (2,3)
A_rect * scales[:, np.newaxis]
# row 0 × 1, row 1 × 2
# → [[1,  2,  3],
#    [8, 10, 12]]
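The broadcasting rule can be checked without building any arrays: NumPy exposes it directly as np.broadcast_shapes (available since NumPy 1.20). Shapes are aligned from the right; two dimensions are compatible when they are equal or one of them is 1.

```python
import numpy as np

print(np.broadcast_shapes((3, 1), (1, 4)))  # the (3,4) case from above
print(np.broadcast_shapes((2, 3), (3,)))    # a (3,) vector stretches over rows

# Incompatible trailing dimensions fail fast
try:
    np.broadcast_shapes((2, 3), (2,))       # 3 vs 2 — neither is 1
except ValueError as e:
    print("incompatible:", e)
```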
4. In-place multiplication — *=
arr = np.array([1., 2., 3., 4.])
arr *= 5
print(arr)  # → [ 5. 10. 15. 20.]
# modifies the array directly — no new array is allocated
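One caveat worth knowing: because *= writes back into the existing buffer, it cannot change the array's dtype. Multiplying an integer array in place by a float is refused on recent NumPy versions (the error is a TypeError subclass), whereas the out-of-place arr * 0.5 would simply produce a new float array. A small sketch:

```python
import numpy as np

arr = np.array([1., 2., 3., 4.])
before = id(arr)
arr *= 5
assert id(arr) == before     # same object — truly in place

# In-place ops keep the dtype: int array *= float fails
ints = np.array([1, 2, 3])
try:
    ints *= 0.5              # would need an int → float cast
except TypeError as e:
    print("refused:", e)
```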
5. np.multiply.reduce() — product along axis
np.multiply.reduce(a)          # → 24  (1×2×3×4), same as np.prod(a)
np.multiply.reduce(A, axis=0)  # → [3, 8]   product down each column
np.multiply.reduce(A, axis=1)  # → [2, 12]  product across each row
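reduce has a sibling: np.multiply.accumulate keeps the intermediate products instead of collapsing to one value, which is the same thing as np.cumprod.

```python
import numpy as np

a = np.array([1, 2, 3, 4])
np.multiply.accumulate(a)  # → [ 1,  2,  6, 24]  running product
np.cumprod(a)              # → same result, friendlier name
```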
PART 2 — MATRIX MULTIPLICATION
What it is
Matrix multiplication: rows of left × columns of right, summed.
Shape rule: (m, k) @ (k, n) → (m, n) — the inner dimensions (k and k) must match.
A (2×2) @ B (2×2) → C (2×2)
C[0,0] = A[0,:] · B[:,0] = 1×5 + 2×7 = 19
C[0,1] = A[0,:] · B[:,1] = 1×6 + 2×8 = 22
C[1,0] = A[1,:] · B[:,0] = 3×5 + 4×7 = 43
C[1,1] = A[1,:] · B[:,1] = 3×6 + 4×8 = 50
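The four dot products above are exactly what a naive triple loop computes. This is only a sketch of the definition — @ dispatches to optimized BLAS routines and should always be used in practice:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

C = np.zeros((2, 2), dtype=int)
for i in range(2):           # rows of A
    for j in range(2):       # columns of B
        for k in range(2):   # shared inner dimension
            C[i, j] += A[i, k] * B[k, j]

print(C)                     # → [[19 22], [43 50]]
assert np.array_equal(C, A @ B)
```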
np.matmul(A, B)            # → [[19, 22], [43, 50]]  same as A @ B
np.matmul(A_rect, B_rect)  # → [[ 58,  64], [139, 154]]

# matmul does NOT accept scalars — use np.dot or * for that
np.matmul(A, 2)            # → ValueError
A * 2                      # → fine for scalar multiplication
8. np.dot() — works for 1D, 2D, and scalars
# 1D: dot product → scalar
np.dot(a, b)                 # → 300  (same as a @ b)

# 2D: matrix multiplication
np.dot(A, B)                 # → [[19, 22], [43, 50]]  same as A @ B

# Scalar: element-wise scale
np.dot(A, 3)                 # → [[ 3,  6], [ 9, 12]]

# Mixed 1D × 2D
np.dot(np.array([1, 2]), A)  # → [7, 10]  (1×2) @ (2×2) = (1×2)
# Stack of matrices — shape (batch, m, k) @ (batch, k, n) → (batch, m, n)
batch_A = np.random.default_rng(42).random((5, 2, 3))  # 5 matrices of (2,3)
batch_B = np.random.default_rng(0).random((5, 3, 4))   # 5 matrices of (3,4)
result = batch_A @ batch_B   # → shape (5, 2, 4)
# matmul applies independently to each of the 5 matrix pairs
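"Applies independently to each pair" can be verified directly: every 2D slice of the batched result equals a plain matmul of the corresponding slices.

```python
import numpy as np

batch_A = np.random.default_rng(42).random((5, 2, 3))
batch_B = np.random.default_rng(0).random((5, 3, 4))
result = batch_A @ batch_B

# Each slice of the batched result matches an ordinary 2D matmul
for i in range(5):
    assert np.allclose(result[i], batch_A[i] @ batch_B[i])
print(result.shape)   # → (5, 2, 4)
```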
PART 3 — RELATED OPERATIONS
11. np.outer() — outer product of two 1D vectors
# outer product: every element of x combined with every element of y
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
np.outer(x, y)
# → [[ 4,  5,  6],
#    [ 8, 10, 12],
#    [12, 15, 18]]
# shape: (len(x), len(y))

# Same result using broadcasting
x[:, np.newaxis] * y  # → same (3, 3) array
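A useful property to remember: an outer product of nonzero vectors always has rank 1, because every row is a multiple of y (and every column a multiple of x). A quick check:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
M = np.outer(x, y)

assert np.linalg.matrix_rank(M) == 1
assert np.array_equal(M[1], 2 * y)   # row 1 = x[1] * y
```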
12. np.inner() — inner product
# 1D: same as the dot product
np.inner(a, b)  # → 300

# 2D: inner product over the last axes
np.inner(A, B)
# → A[i,:] · B[j,:] for all i, j
# → [[17, 23],
#    [39, 53]]
# Note: NOT the same as A @ B
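Because np.inner contracts the last axis of both arguments, for 2D arrays it is equivalent to A @ B.T rather than A @ B — which is exactly why the result above differs from matrix multiplication.

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

assert np.array_equal(np.inner(A, B), A @ B.T)
print(np.inner(A, B))   # → [[17 23], [39 53]]
```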
13. np.tensordot() — generalized dot product over specified axes
# axes=1 — contract the last axis of A with the first axis of B
np.tensordot(A, B, axes=1)           # → [[19, 22], [43, 50]]  same as A @ B

# Sum over specific axes
np.tensordot(A, B, axes=([1], [0]))  # → same as A @ B

# axes=0 — outer product
np.tensordot(a[:3], a[:3], axes=0)
# → [[1, 2, 3],
#    [2, 4, 6],
#    [3, 6, 9]]
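np.einsum expresses the same contractions with explicit index labels, which can make it easier to see which axes are being summed. The three patterns from this note, side by side:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

# matrix multiplication: sum over the shared index k
assert np.array_equal(np.einsum('ik,kj->ij', A, B), A @ B)

# outer product: no shared index, nothing summed
assert np.array_equal(np.einsum('i,j->ij', A[0], B[0]),
                      np.outer(A[0], B[0]))

# full contraction: sum every element-wise product
assert np.einsum('ij,ij->', A, B) == np.sum(A * B)
```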
14. Element-wise square — np.square() vs matrix square
# Element-wise square — each element squared
np.square(A)  # → [[ 1,  4],
              #    [ 9, 16]]
A * A         # → same
A ** 2        # → same

# Matrix square — the matrix multiplied by itself
A @ A         # → [[ 7, 10],
              #    [15, 22]]
np.linalg.matrix_power(A, 2)  # → same
15. Hadamard product vs matrix product — side by side
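A minimal side-by-side with the same A and B as the rest of this note — same operands, completely different results:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A * B)   # Hadamard: [[ 5 12], [21 32]]  — position × position
print(A @ B)   # matrix:   [[19 22], [43 50]]  — row · column, summed
```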
18. np.linalg.matrix_power() — raise matrix to a power
np.linalg.matrix_power(A, 2)   # → A @ A
np.linalg.matrix_power(A, 3)   # → A @ A @ A
np.linalg.matrix_power(A, -1)  # → inverse of A (same as np.linalg.inv(A))
np.linalg.matrix_power(A, 0)   # → identity matrix
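The negative-power and zero-power cases are easy to sanity-check: A to the power −1 times A should give the identity, and power 0 should be the identity itself.

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])

inv = np.linalg.matrix_power(A, -1)
assert np.allclose(inv @ A, np.eye(2))          # A^-1 @ A ≈ I
assert np.allclose(inv, np.linalg.inv(A))       # same as explicit inverse
assert np.array_equal(np.linalg.matrix_power(A, 0), np.eye(2))
```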
PART 5 — COMMON MISTAKES
19. Mistake reference
# ❌ Using * for matrix multiplication — gives element-wise
A * B                   # → [[ 5, 12], [21, 32]]  WRONG if you wanted matrix multiply
# ✅ Use @ for matrix multiplication
A @ B                   # → [[19, 22], [43, 50]]  CORRECT

# ❌ Forgetting shape rules
A_rect @ A_rect         # ValueError — (2,3) @ (2,3): inner dims 3 and 2 don't match
# ✅ Transpose to fix the shapes
A_rect @ A_rect.T       # (2,3) @ (3,2) → (2,2) ✓
A_rect.T @ A_rect       # (3,2) @ (2,3) → (3,3) ✓

# ❌ Assuming commutativity
(A @ B == B @ A).all()  # → False — matrix multiplication order matters

# ❌ np.matmul with a scalar
np.matmul(A, 3)         # ValueError
# ✅ Use * for scalars
A * 3                   # → [[ 3,  6], [ 9, 12]]
🧠 Quick Reference
| Operation | Syntax | Shape rule |
| --- | --- | --- |
| Element-wise multiply | a * b or np.multiply(a, b) | same shape or broadcastable |
| Scalar multiply | a * 3 | any shape |
| Dot product (1D) | a @ b or np.dot(a, b) | (n,) @ (n,) → scalar |
| Matrix multiply | A @ B or np.matmul(A, B) | (m,k) @ (k,n) → (m,n) |
| Matrix × vector | A @ v | (m,n) @ (n,) → (m,) |
| Outer product | np.outer(x, y) | (m,) × (n,) → (m,n) |
| Element-wise square | A ** 2 or np.square(A) | same shape |
| Matrix square | A @ A | (n,n) @ (n,n) → (n,n) |
| Transpose | A.T | (m,n) → (n,m) |
| Inverse | np.linalg.inv(A) | square matrix only |
| Solve Ax=b | np.linalg.solve(A, b) | A square, b vector |
| Matrix power | np.linalg.matrix_power(A, n) | square matrix only |
🧠 Mental Model
Two completely different operations — easy to confuse:
Element-wise * → same position × same position
shape stays the same
[1,2] * [3,4] = [3, 8]
Matrix mult @ → rows × columns, then sum
inner dims must match, outer dims survive
(2,3) @ (3,4) → (2,4)
When to use which:
* → scaling, masking, component-wise operations
@ → linear transformations, dot products, neural networks
Shape rule for @:
(m, k) @ (k, n) → (m, n) — the two inner dimensions (k) must match
Order matters for @:
A @ B ≠ B @ A (unlike multiplication of scalars)
np.dot vs @ vs np.matmul:
@ → preferred, clean syntax, same as matmul
np.matmul → same as @ but no scalar support
np.dot → works for scalars too, legacy API