# Tensor Indexing and Slicing
NKIPy tensors support a subset of NumPy’s indexing and slicing operations, optimized for machine learning workloads.
## Supported Operations

| Operation | Example | Description |
|---|---|---|
| Basic slicing | `x[1:3]` | Slice with `start:stop` |
| Step slicing | `x[::2]` | Every nth element |
| Negative slicing | `x[:-1]` | Negative bounds |
| Negative indexing | `x[-1]` | Single negative indices |
| Array indexing | `x[indices]` | Index with array (single axis) |
| List indexing | `x[[0, 2, 4]]` | Index with Python list |
| Mixed indexing | `x[0:2, indices, :]` | Static slice + dynamic index |
| Scalar indexing | `x[scalar_idx]` | Use 0D tensor as index |
| Assignment | `x[1:3] = y` | Slice assignment |
| Chained slicing | `x[1:5][0:2]` | Multiple slice operations |
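To make the table concrete, the sketch below runs each supported pattern on an ordinary NumPy array. The array `x`, the index array `indices`, and the chosen shapes are illustrative only; NKIPy follows NumPy's semantics for these operations.

```python
import numpy as np

x = np.arange(24).reshape(4, 6)   # illustrative 2D array
indices = np.array([0, 2])        # dynamic index array

a = x[1:3]            # basic slicing: rows 1 and 2
b = x[::2]            # step slicing: every 2nd row
c = x[:-1]            # negative slicing: drop the last row
d = x[-1]             # negative indexing: last row
e = x[indices]        # array indexing on a single axis
f = x[[0, 2, 3]]      # list indexing
g = x[0:2, indices]   # mixed indexing: static slice + dynamic index
s = x[np.array(1)]    # scalar (0D) index
x[1:3] = 0            # slice assignment
h = x[1:4][0:2]       # chained slicing
```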
## Unsupported Operations

| Operation | Example | Workaround |
|---|---|---|
| Multiple dynamic indices | `x[batch_idx, seq_idx, :]` | Use separate operations |
| Boolean indexing | `x[mask]` | Use `np.where` to build index arrays |
| Ellipsis | `x[..., 0]` | Use explicit slicing |
| newaxis/None | `x[:, None]` | Use `np.reshape` |
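The workarounds above map to ordinary NumPy calls. The sketch below illustrates the substitutions with a hypothetical array `x` and boolean mask; it relies only on standard NumPy functions (`np.where`, `np.reshape`), not anything NKIPy-specific.

```python
import numpy as np

x = np.random.randn(4, 8, 16)
mask = np.array([True, False, True, False])

# Boolean indexing workaround: convert the mask into an explicit index array.
idx = np.where(mask)[0]
selected = x[idx]                        # instead of x[mask]

# Ellipsis workaround: spell out every axis explicitly.
first_feature = x[:, :, 0]               # instead of x[..., 0]

# newaxis/None workaround: add the extra axis with a reshape.
expanded = np.reshape(x, (4, 1, 8, 16))  # instead of x[:, None, :, :]
```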
## Limitations

**Single dynamic index:** Only one tensor/array index per operation is supported. For multiple dynamic indices, use separate operations:
```python
# ❌ Not supported
tensor[batch_idx, seq_idx, :]

# ✅ Use separate operations
batch_selected = tensor[batch_idx, :, :]
result = batch_selected[:, seq_idx, :]
```
## Examples

### Batch Selection
```python
import numpy as np

def select_batches(tensor, batch_indices):
    return tensor[batch_indices, :, :]

# Usage
batch_tensor = np.random.randn(8, 32, 64)  # [batch, seq, hidden]
indices = np.array([0, 2, 4, 6])
result = select_batches(batch_tensor, indices)  # Shape: (4, 32, 64)
```
### Token Selection
```python
import numpy as np

def select_tokens(sequences, token_positions):
    return sequences[:, token_positions, :]

# Usage
sequences = np.random.randn(4, 128, 768)  # [batch, seq_len, hidden]
positions = np.array([0, 10, 20, 30])
result = select_tokens(sequences, positions)  # Shape: (4, 4, 768)
```
### Embedding Lookup
```python
import numpy as np

def embedding_lookup(embeddings, token_ids):
    flat_tokens = np.reshape(token_ids, (-1,))
    selected = embeddings[flat_tokens]
    batch_size, seq_len = token_ids.shape
    return np.reshape(selected, (batch_size, seq_len, embeddings.shape[1]))

# Usage
embeddings = np.random.randn(10000, 512)  # Vocabulary embeddings
token_ids = np.random.randint(0, 10000, (8, 64))
result = embedding_lookup(embeddings, token_ids)  # Shape: (8, 64, 512)
```
### Scalar Tensor Indexing
```python
import numpy as np

def select_expert(top_k_indices, expert_weights):
    expert_idx = top_k_indices[0]  # 0D tensor (scalar)
    return expert_weights[expert_idx]

# Usage - Mixture of Experts routing
top_k_indices = np.array([1, 0, 2], dtype=np.int32)
expert_weights = np.array([[0.8, 0.6], [0.4, 0.7], [0.5, 0.9]])
result = select_expert(top_k_indices, expert_weights)  # Shape: (2,)
```