Tensor Operations

Merging and Splitting

concat: concatenation

axis=? specifies the dimension along which the data are merged.

All dimensions except the one being concatenated must be equal.

In [2]: a = tf.ones([4,35,8])
In [3]: b = tf.zeros([2,35,8])
In [4]: c = tf.concat([a,b],axis = 0)
TensorShape([6, 35, 8])
In [6]: b = tf.zeros([4,3,8])
In [7]: c = tf.concat([a,b],axis = 1)
TensorShape([4, 38, 8])

In [10]: a = tf.ones([4,3])
In [11]: b = tf.zeros([4,4])
In [13]: c = tf.concat([a,b],axis = 1)
<tf.Tensor: shape=(4, 7), dtype=float32, numpy=
array([[1., 1., 1., 0., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0., 0.],
       [1., 1., 1., 0., 0., 0., 0.]], dtype=float32)>
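
A quick check of the shape requirement above (a sketch; the mismatched shape is made up for illustration): concatenating along axis 0 fails when any other dimension differs.

import tensorflow as tf

a = tf.ones([4, 35, 8])
b = tf.zeros([2, 3, 8])  # second dimension: 3 != 35
try:
    tf.concat([a, b], axis=0)
except tf.errors.InvalidArgumentError as e:
    print('concat failed:', e)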

stack: creating a new dimension

All dimensions must be exactly the same.

In [22]: a = tf.ones([4,35,8])
In [23]: b = tf.zeros([4,35,8])

In [24]: tf.stack([a,b],axis=0).shape
Out[24]: TensorShape([2, 4, 35, 8])
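
The new dimension can be inserted at any position up to the rank. For instance (a continuation sketch, not from the original session):

In [25]: tf.stack([a,b],axis=3).shape
Out[25]: TensorShape([4, 35, 8, 2])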

unstack: splitting

Splits a dimension into as many tensors as its size.

In [26]: a = tf.ones([4,35,8])
In [27]: b = tf.ones([4,35,8])
In [28]: c = tf.stack([a,b])
TensorShape([2, 4, 35, 8])

In [31]: aa,bb = tf.unstack(c,axis = 0)
In [32]: aa.shape,bb.shape
Out[32]: (TensorShape([4, 35, 8]), TensorShape([4, 35, 8]))

In [33]: res = tf.unstack(c,axis=3)
In [37]: res[0].shape,res[7].shape
Out[37]: (TensorShape([2, 4, 35]), TensorShape([2, 4, 35]))

split

More flexible than unstack: num_or_size_splits lets you choose the number or the sizes of the pieces.

In [39]: res = tf.split(c,axis=3,num_or_size_splits=2)
In [40]: len(res)
Out[40]: 2
In [41]: res[0].shape
Out[41]: TensorShape([2, 4, 35, 4])

In [42]: res = tf.split(c,axis=3,num_or_size_splits=[2,2,4])
In [43]: res[0].shape,res[1].shape,res[2].shape
Out[43]:
(TensorShape([2, 4, 35, 2]),
 TensorShape([2, 4, 35, 2]),
 TensorShape([2, 4, 35, 4]))

Data Statistics

Vector Norm

L2 norm: the square root of the sum of squares

Infinity norm: the maximum of the absolute values of the elements

L1 norm: the sum of the absolute values
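
In formulas: $\|x\|_2 = \sqrt{\sum_i x_i^2}$, $\|x\|_\infty = \max_i |x_i|$, $\|x\|_1 = \sum_i |x_i|$.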

# norm: the vector norm; defaults to the L2 norm
In [2]: a = tf.ones([2,2])

In [3]: tf.norm(a)
Out[3]: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>
In [4]: tf.sqrt(tf.reduce_sum(tf.square(a)))
Out[4]: <tf.Tensor: shape=(), dtype=float32, numpy=2.0>

In [5]: a = tf.ones([4,28,28,3])

In [6]: tf.norm(a)
Out[6]: <tf.Tensor: shape=(), dtype=float32, numpy=96.99484>
In [7]: tf.sqrt(tf.reduce_sum(tf.square(a)))
Out[7]: <tf.Tensor: shape=(), dtype=float32, numpy=96.99484>

ord specifies which norm to compute.

In [23]: a = tf.ones([2])
In [24]: b= tf.fill([2],2.)
In [25]: c = tf.stack([a,b],axis=0)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
       [2., 2.]], dtype=float32)>

In [27]: tf.norm(c,ord=1)
Out[27]: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>

In [28]: tf.norm(c,ord=1,axis=0)
Out[28]: <tf.Tensor: shape=(2,), dtype=float32, numpy=array([3., 3.], dtype=float32)>

In [29]: tf.norm(c,ord=1,axis=1)
Out[29]: <tf.Tensor: shape=(2,), dtype=float32, numpy=array([2., 4.], dtype=float32)>
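
ord also accepts np.inf for the infinity norm (a quick sketch, continuing with c above):

import numpy as np

tf.norm(c, ord=np.inf, axis=1)  # max |x_i| per row: [1., 2.]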

reduce_min/max/mean

Minimum / maximum / mean

With axis specified, the reduction runs along that dimension; without it, it runs over all elements.

In [33]: tf.reduce_min(c),tf.reduce_max(c),tf.reduce_mean(c)
Out[33]:
(<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
 <tf.Tensor: shape=(), dtype=float32, numpy=2.0>,
 <tf.Tensor: shape=(), dtype=float32, numpy=1.5>)

In [34]: tf.reduce_min(c , axis=1)
Out[34]: <tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)>
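
The reduce_* functions also take keepdims=True, which keeps the reduced dimension with size 1 (a quick sketch, continuing with c above):

tf.reduce_mean(c, axis=1, keepdims=True)  # shape (2, 1): [[1.], [2.]]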

argmax/argmin

The positions of the maximum/minimum values; the default axis is 0.

In [31]: a = tf.random.normal([4,10])
In [35]: tf.argmax(a)
Out[35]: <tf.Tensor: shape=(10,), dtype=int64, numpy=array([1, 2, 3, 0, 0, 2, 3, 0, 2, 1])>

In [51]: d
Out[51]:
<tf.Tensor: shape=(3, 2), dtype=float32, numpy=
array([[1., 1.],
       [2., 2.],
       [0., 3.]], dtype=float32)>

In [52]: tf.argmax(d)
Out[52]: <tf.Tensor: shape=(2,), dtype=int64, numpy=array([1, 2])>

In [53]: tf.argmin(d)
Out[53]: <tf.Tensor: shape=(2,), dtype=int64, numpy=array([2, 0])>

In [54]: tf.argmax(d,axis=1)
Out[54]: <tf.Tensor: shape=(3,), dtype=int64, numpy=array([0, 0, 1])>

equal

In [56]: b= tf.range(5)
In [58]: a = tf.constant([0,1,3,2,4])

In [59]: tf.equal(a,b)
Out[59]: <tf.Tensor: shape=(5,), dtype=bool, numpy=array([ True, True, False, False, True])>

In [60]: res = tf.equal(a,b)
In [62]: tf.reduce_sum(tf.cast(res,dtype=tf.int32)) # cast converts the dtype; the sum counts the matching elements
Out[62]: <tf.Tensor: shape=(), dtype=int32, numpy=3>

Accuracy

In [64]: a
Out[64]:
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[0.1 , 0.2 , 0.7 ],
       [0.9 , 0.05, 0.05]], dtype=float32)>

In [67]: pred = tf.cast(tf.argmax(a,axis=1),dtype=tf.int32)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 0], dtype=int32)>

In [72]: y
Out[72]: <tf.Tensor: shape=(2,), dtype=int32, numpy=array([2, 1], dtype=int32)>

In [82]: correct = tf.reduce_mean(tf.cast(tf.equal(y,pred),dtype = tf.float32))
In [83]: correct
Out[83]: <tf.Tensor: shape=(), dtype=float32, numpy=0.5>

unique

In [84]: a = tf.constant([4,2,2,4,3])
In [87]: unique,idx = tf.unique(a)
In [88]: unique
Out[88]: <tf.Tensor: shape=(3,), dtype=int32, numpy=array([4, 2, 3], dtype=int32)>

In [89]: idx
Out[89]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 1, 1, 0, 2], dtype=int32)>

In [90]: tf.gather(unique,idx) # rebuilds the original tensor: unique values gathered by the stored indices
Out[90]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([4, 2, 2, 4, 3], dtype=int32)>

Tensor Sorting

  • sort, argsort

1-D

In [3]: a = tf.random.shuffle(tf.range(5))
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([2, 3, 0, 4, 1], dtype=int32)>

In [5]: tf.sort(a,direction = 'DESCENDING')# descending order
Out[5]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([4, 3, 2, 1, 0], dtype=int32)>
In [6]: tf.argsort(a,direction = 'DESCENDING')# indices that put a into descending order
Out[6]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([3, 1, 0, 4, 2], dtype=int32)>

In [7]: idx = tf.argsort(a,direction = 'DESCENDING')
In [8]: tf.gather(a,idx)
Out[8]: <tf.Tensor: shape=(5,), dtype=int32, numpy=array([4, 3, 2, 1, 0], dtype=int32)>

Higher dimensions: sorting is applied along the last dimension.

In [9]: a = tf.random.uniform([3,3],maxval=10,dtype=tf.int32)
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[8, 0, 8],
       [9, 2, 8],
       [2, 4, 9]], dtype=int32)>

In [11]: tf.sort(a)
Out[11]:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[0, 8, 8],
       [2, 8, 9],
       [2, 4, 9]], dtype=int32)>

In [12]: tf.argsort(a)
Out[12]:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 0, 2],
       [1, 2, 0],
       [0, 1, 2]], dtype=int32)>
  • top_k

The largest (or smallest) k values.

In [13]: a
Out[13]:
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[8, 0, 8],
       [9, 2, 8],
       [2, 4, 9]], dtype=int32)>

In [14]: res = tf.math.top_k(a,2)# returns a namedtuple of (values, indices)

In [15]: res.indices# top-2 indices, like argsort
Out[15]:
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[0, 2],
       [0, 2],
       [2, 1]], dtype=int32)>
In [16]: res.values# top-2 values, like sort
Out[16]:
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[8, 8],
       [9, 8],
       [9, 4]], dtype=int32)>
  • Top-k Accuracy

A prediction counts as correct if the target appears among the top k predictions; this is another way to measure how good a model is.

# topk=(1,2,3) returns the top-1/2/3 accuracies; output: [b, N], target: [b]
def accuracy(output, target, topk=(1,)):
    maxk = max(topk)
    batch_size = target.shape[0]

    pred = tf.math.top_k(output, maxk).indices
    pred = tf.transpose(pred, perm=[1, 0])  # transpose: [b, maxk] -> [maxk, b]
    target_ = tf.broadcast_to(target, pred.shape)
    correct = tf.equal(pred, target_)

    res = []
    for k in topk:
        correct_k = tf.cast(tf.reshape(correct[:k], [-1]), dtype=tf.float32)
        correct_k = tf.reduce_sum(correct_k)
        acc = float(correct_k / batch_size)
        res.append(acc)
    return res
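
A usage sketch, reusing the tensors from the Accuracy example above (values chosen for illustration): target 2 is the top-1 prediction in the first row, while target 1 is only the second choice in the second row, so top-1 accuracy is 0.5 and top-2 accuracy is 1.0.

output = tf.constant([[0.1, 0.2, 0.7],
                      [0.9, 0.05, 0.05]])
target = tf.constant([2, 1])
print(accuracy(output, target, topk=(1, 2)))  # [0.5, 1.0]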

Padding and Copying

pad: padding

  • 1-D [3] -> pad 1 element before and 2 after -> 1-D [6] ([[1,2]] means: one dimension, padded 1 before and 2 after; a 1-D sketch follows the 2-D example below)
  • 2-D [2,2] -> rows: none before, 1 after; columns: 1 before, 1 after -> [3,4] ([[0,1],[1,1]] means: two dimensions, the first padded 0 before and 1 after, the second padded 1 before and 1 after)
  • Pads with 0 by default
In [21]: a = tf.reshape(tf.range(4),[2,2])

In [22]: tf.pad(a,[[0,1],[1,1]])
Out[22]:
<tf.Tensor: shape=(3, 4), dtype=int32, numpy=
array([[0, 0, 1, 0],
       [0, 2, 3, 0],
       [0, 0, 0, 0]], dtype=int32)>
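
And the 1-D case from the first bullet (a sketch with made-up values, not from the original session):

In [23]: tf.pad(tf.range(1,4),[[1,2]])
Out[23]: <tf.Tensor: shape=(6,), dtype=int32, numpy=array([0, 1, 2, 3, 0, 0], dtype=int32)>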

tile: copying

  • [a,b] means the first dimension is repeated a times and the second b times
In [24]: tf.tile(a,[1,2])
Out[24]:
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[0, 1, 0, 1],
       [2, 3, 2, 3]], dtype=int32)>

In [25]: tf.tile(a,[2,2])
Out[25]:
<tf.Tensor: shape=(4, 4), dtype=int32, numpy=
array([[0, 1, 0, 1],
       [2, 3, 2, 3],
       [0, 1, 0, 1],
       [2, 3, 2, 3]], dtype=int32)>

In [26]: tf.tile(a,[3,2])
Out[26]:
<tf.Tensor: shape=(6, 4), dtype=int32, numpy=
array([[0, 1, 0, 1],
       [2, 3, 2, 3],
       [0, 1, 0, 1],
       [2, 3, 2, 3],
       [0, 1, 0, 1],
       [2, 3, 2, 3]], dtype=int32)>

In [27]: tf.tile(a,[1,1])
Out[27]:
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[0, 1],
       [2, 3]], dtype=int32)>
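
For comparison: when the repetition is along a new (or size-1) dimension, tf.broadcast_to produces the same values as tile without materializing the copies up front (a sketch, continuing with a above):

aa = tf.expand_dims(a, axis=0)   # shape [1, 2, 2]
tf.tile(aa, [2, 1, 1])           # shape [2, 2, 2], data physically copied
tf.broadcast_to(aa, [2, 2, 2])   # same values, no explicit copy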

Tensor Clipping

  • maximum: values smaller than a are set to a
  • minimum: values larger than b are set to b
  • clip_by_value: values smaller than a are set to a and values larger than b are set to b (clamps the data to [a, b])
In [3]: a
Out[3]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)>

In [4]: tf.maximum(a,2)
Out[4]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([2, 2, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int32)>

In [5]: tf.minimum(a,8)
Out[5]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 1, 2, 3, 4, 5, 6, 7, 8, 8], dtype=int32)>

In [6]: tf.clip_by_value(a,2,8) # same as tf.minimum(tf.maximum(a,2),8)
Out[6]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([2, 2, 2, 3, 4, 5, 6, 7, 8, 8], dtype=int32)>

The ReLU function: max(0, x)

In [9]: a
Out[9]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([-5, -4, -3, -2, -1, 0, 1, 2, 3, 4], dtype=int32)>

In [10]: tf.nn.relu(a)
Out[10]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 1, 2, 3, 4], dtype=int32)>

In [11]: tf.maximum(a,0)
Out[11]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([0, 0, 0, 0, 0, 0, 1, 2, 3, 4], dtype=int32)>
  • clip_by_norm

When clipping by value alone, the direction of the gradient changes, which makes it harder to reach the optimum; clipping by norm rescales the tensor proportionally, preserving its direction.

In [14]: a = tf.random.normal([2,2],mean=10)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[12.561244,  9.496627],
       [10.774074, 10.137045]], dtype=float32)>

In [16]: tf.norm(a)
Out[16]: <tf.Tensor: shape=(), dtype=float32, numpy=21.605812>

In [17]: aa = tf.clip_by_norm(a,15)
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[8.720739, 6.593106],
       [7.479983, 7.037721]], dtype=float32)>

In [20]: tf.norm(aa)
Out[20]: <tf.Tensor: shape=(), dtype=float32, numpy=15.0>
  • Gradient clipping

new_grads, total_norm = tf.clip_by_global_norm(grads, 25) clips a list of gradients by their global norm, returning the clipped gradients and the original global norm.
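
A minimal runnable sketch (the variable and loss are made up so that the gradient's global norm exceeds the limit):

import tensorflow as tf

w = tf.Variable([[10., 10.]])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(w))       # gradient is 2w
grads = tape.gradient(loss, [w])
new_grads, total_norm = tf.clip_by_global_norm(grads, 25)
print(total_norm)                            # ~28.28, the pre-clip global norm
print(tf.linalg.global_norm(new_grads))      # 25.0 after clipping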

Other Operations

where

  • where(tensor) takes a single bool tensor and returns the coordinates of all True entries
In [23]: a
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[ 1.7502894 ,  0.42312455, -0.05275195],
       [-1.2338606 ,  0.21349485,  1.2465866 ],
       [ 0.11755344, -0.45547193,  1.792042  ]], dtype=float32)>

In [24]: mask = a > 0
<tf.Tensor: shape=(3, 3), dtype=bool, numpy=
array([[ True,  True, False],
       [False,  True,  True],
       [ True, False,  True]])>

In [26]: tf.boolean_mask(a,mask)
Out[26]:
<tf.Tensor: shape=(6,), dtype=float32, numpy=
array([1.7502894 , 0.42312455, 0.21349485, 1.2465866 , 0.11755344,
       1.792042  ], dtype=float32)>

In [27]: indices = tf.where(mask)
<tf.Tensor: shape=(6, 2), dtype=int64, numpy=
array([[0, 0],
       [0, 1],
       [1, 1],
       [1, 2],
       [2, 0],
       [2, 2]])>

In [29]: tf.gather_nd(a,indices)
Out[29]:
<tf.Tensor: shape=(6,), dtype=float32, numpy=
array([1.7502894 , 0.42312455, 0.21349485, 1.2465866 , 0.11755344,
       1.792042  ], dtype=float32)>
  • where(cond,A,B) takes values from A where cond is True and from B where cond is False
In [30]: mask
Out[30]:
<tf.Tensor: shape=(3, 3), dtype=bool, numpy=
array([[ True,  True, False],
       [False,  True,  True],
       [ True, False,  True]])>

In [31]: A=tf.ones([3,3])

In [32]: B=tf.zeros([3,3])

In [33]: tf.where(mask,A,B)
Out[33]:
<tf.Tensor: shape=(3, 3), dtype=float32, numpy=
array([[1., 1., 0.],
       [0., 1., 1.],
       [1., 0., 1.]], dtype=float32)>

scatter_nd

Targeted updates by coordinate: the values in updates are written, in the order given by indices, to the corresponding positions of a tensor of the given shape.

Updates can only be applied onto an all-zero base tensor.

In [34]: indices = tf.constant([[4],[3],[1],[7]])
In [35]: updates = tf.constant([9,10,11,12])
In [36]: shape = tf.constant([8])

In [38]: tf.scatter_nd(indices,updates,shape)
Out[38]: <tf.Tensor: shape=(8,), dtype=int32, numpy=array([ 0, 11, 0, 10, 9, 0, 0, 12], dtype=int32)>
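
The same works with multi-dimensional coordinates; a 2-D sketch (values made up for illustration):

indices = tf.constant([[0, 1], [2, 3]])
updates = tf.constant([5, 6])
tf.scatter_nd(indices, updates, shape=[3, 4])
# array([[0, 5, 0, 0],
#        [0, 0, 0, 0],
#        [0, 0, 0, 6]])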

meshgrid

  • points: generating the coordinates by hand
In [39]: points = []

In [40]: import numpy as np
# 5 points from -2 to 2 on each axis; 25 (x, y) points in total
In [41]: for y in np.linspace(-2,2,5):
    ...:     for x in np.linspace(-2,2,5):
    ...:         points.append([x,y])
    ...:

In [42]: points
Out[42]:
[[-2.0, -2.0],
 [-1.0, -2.0],
 [0.0, -2.0],
 [1.0, -2.0],
 [2.0, -2.0],
 [-2.0, -1.0],
 [-1.0, -1.0],
 [0.0, -1.0],
 [1.0, -1.0],
 [2.0, -1.0],
 [-2.0, 0.0],
 [-1.0, 0.0],
 [0.0, 0.0],
 [1.0, 0.0],
 [2.0, 0.0],
 [-2.0, 1.0],
 [-1.0, 1.0],
 [0.0, 1.0],
 [1.0, 1.0],
 [2.0, 1.0],
 [-2.0, 2.0],
 [-1.0, 2.0],
 [0.0, 2.0],
 [1.0, 2.0],
 [2.0, 2.0]]
  • Using meshgrid
In [43]:  y = tf.linspace(-2.,2,5)
Out[44]: <tf.Tensor: shape=(5,), dtype=float32, numpy=array([-2., -1., 0., 1., 2.], dtype=float32)>

In [45]: x = tf.linspace(-2.,2,5)
Out[46]: <tf.Tensor: shape=(5,), dtype=float32, numpy=array([-2., -1., 0., 1., 2.], dtype=float32)>

In [47]: points_x,points_y = tf.meshgrid(x,y)

In [49]: points_x.shape
Out[49]: TensorShape([5, 5])

In [50]: points = tf.stack([points_x,points_y],axis = 2)
<tf.Tensor: shape=(5, 5, 2), dtype=float32, numpy=
array([[[-2., -2.],
        [-1., -2.],
        [ 0., -2.],
        [ 1., -2.],
        [ 2., -2.]],

       [[-2., -1.],
        [-1., -1.],
        [ 0., -1.],
        [ 1., -1.],
        [ 2., -1.]],
...

Plotting contour lines of a function

z = sin(x) + sin(y)

import tensorflow as tf

import matplotlib.pyplot as plt


def func(x):
    # z = sin(x) + sin(y)
    z = tf.math.sin(x[..., 0]) + tf.math.sin(x[..., 1])

    return z


x = tf.linspace(0., 2*3.14, 500)
y = tf.linspace(0., 2*3.14, 500)
# [500, 500]
point_x, point_y = tf.meshgrid(x, y)
# [500, 500, 2]
points = tf.stack([point_x, point_y], axis=2)
# points = tf.reshape(points, [-1, 2])
print('points:', points.shape)
z = func(points)
print('z:', z.shape)

plt.figure('plot 2d func value')
plt.imshow(z, origin='lower', interpolation='none')
plt.colorbar()

plt.figure('plot 2d func contour')
plt.contour(point_x, point_y, z)
plt.colorbar()
plt.show()
