Struct valence_math::u64::U64Vec4
#[repr(C)]
pub struct U64Vec4 {
pub x: u64,
pub y: u64,
pub z: u64,
pub w: u64,
}
A 4-dimensional vector.
Fields
x: u64
y: u64
z: u64
w: u64
Implementations
impl U64Vec4
pub fn map<F>(self, f: F) -> U64Vec4
Returns a vector containing each element of self modified by a mapping function f.
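A minimal usage sketch, assuming U64Vec4 is importable at the module path shown in this page's title:

use valence_math::u64::U64Vec4; // assumed import path, matching the page's module

let v = U64Vec4::from_array([1, 2, 3, 4]);
let doubled = v.map(|e| e * 2); // the closure is applied to x, y, z and w in turn
assert!(doubled == U64Vec4::from_array([2, 4, 6, 8]));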
pub fn select(mask: BVec4, if_true: U64Vec4, if_false: U64Vec4) -> U64Vec4
Creates a vector from the elements in if_true and if_false, selecting which to use for each element of self.
A true element in the mask uses the corresponding element from if_true, and false uses the element from if_false.
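A short sketch combining select with one of the comparison methods documented further down (same assumed import as above):

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([1, 5, 3, 7]);
let b = U64Vec4::from_array([4, 4, 4, 4]);
// Per element: where a >= b take the value from a, otherwise from b.
let mask = a.cmpge(b);
let max_like = U64Vec4::select(mask, a, b);
assert!(max_like == U64Vec4::from_array([4, 5, 4, 7]));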
pub const fn from_array(a: [u64; 4]) -> U64Vec4
Creates a new vector from an array.
pub const fn from_slice(slice: &[u64]) -> U64Vec4
Creates a vector from the first 4 values in slice.
Panics
Panics if slice is less than 4 elements long.
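A brief sketch of both constructors; note that from_slice only reads the first four values:

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, 4]);
let w = U64Vec4::from_slice(&[1, 2, 3, 4, 99]); // the fifth value is ignored
assert!(v == w);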
pub fn write_to_slice(self, slice: &mut [u64])
Writes the elements of self to the first 4 elements in slice.
Panics
Panics if slice is less than 4 elements long.
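A small sketch showing that only the first four slots of the destination are touched:

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, 4]);
let mut buf = [0u64; 6];
v.write_to_slice(&mut buf); // writes x, y, z, w into buf[0..4]
assert!(buf == [1, 2, 3, 4, 0, 0]);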
pub fn truncate(self) -> U64Vec3
Creates a 3D vector from the x, y and z elements of self, discarding w.
Truncation to U64Vec3 may also be performed by using self.xyz().
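A brief sketch; the U64Vec3 import path and its new constructor are assumptions here:

use valence_math::u64::{U64Vec3, U64Vec4}; // assumed import paths

let v = U64Vec4::from_array([1, 2, 3, 4]);
// With the Vec4Swizzles trait in scope (see below), v.xyz() produces the same result.
assert!(v.truncate() == U64Vec3::new(1, 2, 3));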
pub fn with_x(self, x: u64) -> U64Vec4
Creates a 4D vector from self with the given value of x.
pub fn with_y(self, y: u64) -> U64Vec4
Creates a 4D vector from self with the given value of y.
pub fn with_z(self, z: u64) -> U64Vec4
Creates a 4D vector from self with the given value of z.
pub fn with_w(self, w: u64) -> U64Vec4
Creates a 4D vector from self with the given value of w.
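A quick sketch of the with_* builders (same assumed import as above); each call returns a copy with one component replaced:

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, 4]);
let v2 = v.with_x(10).with_w(40); // v itself is unchanged
assert!(v2 == U64Vec4::from_array([10, 2, 3, 40]));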
pub fn dot_into_vec(self, rhs: U64Vec4) -> U64Vec4
Returns a vector where every component is the dot product of self and rhs.
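A short sketch of the splatted dot product (same assumed import):

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([1, 2, 3, 4]);
let b = U64Vec4::from_array([5, 6, 7, 8]);
// 1*5 + 2*6 + 3*7 + 4*8 = 70, broadcast into every component.
assert!(a.dot_into_vec(b) == U64Vec4::from_array([70, 70, 70, 70]));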
pub fn min(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the minimum values for each element of self and rhs.
In other words this computes [self.x.min(rhs.x), self.y.min(rhs.y), ..].
pub fn max(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the maximum values for each element of self and rhs.
In other words this computes [self.x.max(rhs.x), self.y.max(rhs.y), ..].
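A component-wise min/max sketch (same assumed import):

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([1, 6, 3, 8]);
let b = U64Vec4::from_array([5, 2, 7, 4]);
assert!(a.min(b) == U64Vec4::from_array([1, 2, 3, 4]));
assert!(a.max(b) == U64Vec4::from_array([5, 6, 7, 8]));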
pub fn clamp(self, min: U64Vec4, max: U64Vec4) -> U64Vec4
Component-wise clamping of values, similar to u64::clamp.
Each element in min must be less-or-equal to the corresponding element in max.
Panics
Will panic if min is greater than max when glam_assert is enabled.
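A clamp sketch (same assumed import); each component is clamped independently into [min, max]:

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([0, 5, 10, 15]);
let lo = U64Vec4::from_array([2, 2, 2, 2]);
let hi = U64Vec4::from_array([8, 8, 8, 8]);
assert!(v.clamp(lo, hi) == U64Vec4::from_array([2, 5, 8, 8]));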
pub fn min_element(self) -> u64
Returns the horizontal minimum of self.
In other words this computes min(x, y, ..).
pub fn max_element(self) -> u64
Returns the horizontal maximum of self.
In other words this computes max(x, y, ..).
pub fn element_sum(self) -> u64
Returns the sum of all elements of self.
In other words, this computes self.x + self.y + ...
pub fn element_product(self) -> u64
Returns the product of all elements of self.
In other words, this computes self.x * self.y * ...
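A sketch of the horizontal reductions above (same assumed import):

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, 4]);
assert!(v.min_element() == 1);
assert!(v.max_element() == 4);
assert!(v.element_sum() == 10);      // 1 + 2 + 3 + 4
assert!(v.element_product() == 24);  // 1 * 2 * 3 * 4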
pub fn cmpeq(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a == comparison for each element of self and rhs.
In other words, this computes [self.x == rhs.x, self.y == rhs.y, ..] for all elements.
pub fn cmpne(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a != comparison for each element of self and rhs.
In other words this computes [self.x != rhs.x, self.y != rhs.y, ..] for all elements.
pub fn cmpge(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a >= comparison for each element of self and rhs.
In other words this computes [self.x >= rhs.x, self.y >= rhs.y, ..] for all elements.
pub fn cmpgt(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a > comparison for each element of self and rhs.
In other words this computes [self.x > rhs.x, self.y > rhs.y, ..] for all elements.
pub fn cmple(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a <= comparison for each element of self and rhs.
In other words this computes [self.x <= rhs.x, self.y <= rhs.y, ..] for all elements.
pub fn cmplt(self, rhs: U64Vec4) -> BVec4
Returns a vector mask containing the result of a < comparison for each element of self and rhs.
In other words this computes [self.x < rhs.x, self.y < rhs.y, ..] for all elements.
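A sketch of how the comparison masks pair with select (same assumed import):

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([1, 5, 3, 7]);
let b = U64Vec4::from_array([1, 4, 4, 7]);
let eq_mask = a.cmpeq(b); // per-element: [true, false, false, true]
// Keep a's value where the elements are equal, zero elsewhere.
let kept = U64Vec4::select(eq_mask, a, U64Vec4::from_array([0, 0, 0, 0]));
assert!(kept == U64Vec4::from_array([1, 0, 0, 7]));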
pub fn length_squared(self) -> u64
Computes the squared length of self.
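A one-liner sketch (same assumed import); no square root is involved for an integer vector:

use valence_math::u64::U64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, 4]);
assert!(v.length_squared() == 30); // 1*1 + 2*2 + 3*3 + 4*4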
pub fn as_i16vec4(&self) -> I16Vec4
Casts all elements of self to i16.
pub fn as_u16vec4(&self) -> U16Vec4
Casts all elements of self to u16.
pub fn as_i64vec4(&self) -> I64Vec4
Casts all elements of self to i64.
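A cast sketch; the I64Vec4 import path and its new constructor are assumptions here, and the casts behave like the scalar as cast applied per component:

use valence_math::u64::U64Vec4;
use valence_math::i64::I64Vec4; // assumed import path

let v = U64Vec4::from_array([1, 2, 3, u64::MAX]);
let signed: I64Vec4 = v.as_i64vec4(); // u64::MAX reinterprets as -1, like `u64::MAX as i64`
assert!(signed == I64Vec4::new(1, 2, 3, -1));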
pub const fn wrapping_add(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the wrapping addition of self and rhs.
In other words this computes [self.x.wrapping_add(rhs.x), self.y.wrapping_add(rhs.y), ..].
pub const fn wrapping_sub(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the wrapping subtraction of self and rhs.
In other words this computes [self.x.wrapping_sub(rhs.x), self.y.wrapping_sub(rhs.y), ..].
pub const fn wrapping_mul(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the wrapping multiplication of self and rhs.
In other words this computes [self.x.wrapping_mul(rhs.x), self.y.wrapping_mul(rhs.y), ..].
pub const fn wrapping_div(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the wrapping division of self and rhs.
In other words this computes [self.x.wrapping_div(rhs.x), self.y.wrapping_div(rhs.y), ..].
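A wrapping-arithmetic sketch (same assumed import); overflow wraps around instead of panicking:

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([u64::MAX, 1, 2, 3]);
let b = U64Vec4::from_array([1, 1, 1, 1]);
// The x component overflows and wraps to 0.
assert!(a.wrapping_add(b) == U64Vec4::from_array([0, 2, 3, 4]));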
pub const fn saturating_add(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the saturating addition of self and rhs.
In other words this computes [self.x.saturating_add(rhs.x), self.y.saturating_add(rhs.y), ..].
pub const fn saturating_sub(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the saturating subtraction of self and rhs.
In other words this computes [self.x.saturating_sub(rhs.x), self.y.saturating_sub(rhs.y), ..].
pub const fn saturating_mul(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the saturating multiplication of self and rhs.
In other words this computes [self.x.saturating_mul(rhs.x), self.y.saturating_mul(rhs.y), ..].
pub const fn saturating_div(self, rhs: U64Vec4) -> U64Vec4
Returns a vector containing the saturating division of self and rhs.
In other words this computes [self.x.saturating_div(rhs.x), self.y.saturating_div(rhs.y), ..].
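A saturating-arithmetic sketch (same assumed import); overflow clamps to u64::MAX and underflow clamps to 0:

use valence_math::u64::U64Vec4; // assumed import path

let a = U64Vec4::from_array([u64::MAX, 1, 2, 3]);
let b = U64Vec4::from_array([1, 5, 1, 1]);
assert!(a.saturating_add(b) == U64Vec4::from_array([u64::MAX, 6, 3, 4]));
assert!(a.saturating_sub(b) == U64Vec4::from_array([u64::MAX - 1, 0, 1, 2]));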
pub const fn wrapping_add_signed(self, rhs: I64Vec4) -> U64Vec4
Returns a vector containing the wrapping addition of self and signed vector rhs.
In other words this computes [self.x.wrapping_add_signed(rhs.x), self.y.wrapping_add_signed(rhs.y), ..].
pub const fn saturating_add_signed(self, rhs: I64Vec4) -> U64Vec4
Returns a vector containing the saturating addition of self and signed vector rhs.
In other words this computes [self.x.saturating_add_signed(rhs.x), self.y.saturating_add_signed(rhs.y), ..].
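A sketch contrasting the two signed-offset additions; the I64Vec4 import path and new constructor are assumptions:

use valence_math::u64::U64Vec4;
use valence_math::i64::I64Vec4; // assumed import path

let v = U64Vec4::from_array([0, 10, 20, 30]);
let delta = I64Vec4::new(-1, -5, 5, -30);
// wrapping_add_signed wraps on underflow; saturating_add_signed clamps at 0.
assert!(v.wrapping_add_signed(delta) == U64Vec4::from_array([u64::MAX, 5, 25, 0]));
assert!(v.saturating_add_signed(delta) == U64Vec4::from_array([0, 5, 25, 0]));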
Trait Implementations
impl AddAssign<&U64Vec4> for U64Vec4
fn add_assign(&mut self, rhs: &U64Vec4)
Performs the += operation.
impl AddAssign<&u64> for U64Vec4
fn add_assign(&mut self, rhs: &u64)
Performs the += operation.
impl AddAssign<u64> for U64Vec4
fn add_assign(&mut self, rhs: u64)
Performs the += operation.
impl AddAssign for U64Vec4
fn add_assign(&mut self, rhs: U64Vec4)
Performs the += operation.
impl DivAssign<&U64Vec4> for U64Vec4
fn div_assign(&mut self, rhs: &U64Vec4)
Performs the /= operation.
impl DivAssign<&u64> for U64Vec4
fn div_assign(&mut self, rhs: &u64)
Performs the /= operation.
impl DivAssign<u64> for U64Vec4
fn div_assign(&mut self, rhs: u64)
Performs the /= operation.
impl DivAssign for U64Vec4
fn div_assign(&mut self, rhs: U64Vec4)
Performs the /= operation.
impl MulAssign<&U64Vec4> for U64Vec4
fn mul_assign(&mut self, rhs: &U64Vec4)
Performs the *= operation.
impl MulAssign<&u64> for U64Vec4
fn mul_assign(&mut self, rhs: &u64)
Performs the *= operation.
impl MulAssign<u64> for U64Vec4
fn mul_assign(&mut self, rhs: u64)
Performs the *= operation.
impl MulAssign for U64Vec4
fn mul_assign(&mut self, rhs: U64Vec4)
Performs the *= operation.
impl RemAssign<&U64Vec4> for U64Vec4
fn rem_assign(&mut self, rhs: &U64Vec4)
Performs the %= operation.
impl RemAssign<&u64> for U64Vec4
fn rem_assign(&mut self, rhs: &u64)
Performs the %= operation.
impl RemAssign<u64> for U64Vec4
fn rem_assign(&mut self, rhs: u64)
Performs the %= operation.
impl RemAssign for U64Vec4
fn rem_assign(&mut self, rhs: U64Vec4)
Performs the %= operation.
impl SubAssign<&U64Vec4> for U64Vec4
fn sub_assign(&mut self, rhs: &U64Vec4)
Performs the -= operation.
impl SubAssign<&u64> for U64Vec4
fn sub_assign(&mut self, rhs: &u64)
Performs the -= operation.
impl SubAssign<u64> for U64Vec4
fn sub_assign(&mut self, rhs: u64)
Performs the -= operation.
impl SubAssign for U64Vec4
fn sub_assign(&mut self, rhs: U64Vec4)
Performs the -= operation.
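Taken together, these impls provide the usual compound-assignment operators against both vectors and scalars; a short sketch with the same assumed import:

use valence_math::u64::U64Vec4; // assumed import path

let mut v = U64Vec4::from_array([1, 2, 3, 4]);
v += U64Vec4::from_array([10, 10, 10, 10]); // component-wise AddAssign
v *= 2;                                     // scalar MulAssign applies to every component
assert!(v == U64Vec4::from_array([22, 24, 26, 28]));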
impl Vec4Swizzles for U64Vec4
type Vec2 = U64Vec2
type Vec3 = U64Vec3
fn xx(self) -> U64Vec2
fn xy(self) -> U64Vec2
fn xz(self) -> U64Vec2
fn xw(self) -> U64Vec2
fn yx(self) -> U64Vec2
fn yy(self) -> U64Vec2
fn yz(self) -> U64Vec2
fn yw(self) -> U64Vec2
fn zx(self) -> U64Vec2
fn zy(self) -> U64Vec2
fn zz(self) -> U64Vec2
fn zw(self) -> U64Vec2
fn wx(self) -> U64Vec2
fn wy(self) -> U64Vec2
fn wz(self) -> U64Vec2
fn ww(self) -> U64Vec2
fn xxx(self) -> U64Vec3
fn xxy(self) -> U64Vec3
fn xxz(self) -> U64Vec3
fn xxw(self) -> U64Vec3
fn xyx(self) -> U64Vec3
fn xyy(self) -> U64Vec3
fn xyz(self) -> U64Vec3
fn xyw(self) -> U64Vec3
fn xzx(self) -> U64Vec3
fn xzy(self) -> U64Vec3
fn xzz(self) -> U64Vec3
fn xzw(self) -> U64Vec3
fn xwx(self) -> U64Vec3
fn xwy(self) -> U64Vec3
fn xwz(self) -> U64Vec3
fn xww(self) -> U64Vec3
fn yxx(self) -> U64Vec3
fn yxy(self) -> U64Vec3
fn yxz(self) -> U64Vec3
fn yxw(self) -> U64Vec3
fn yyx(self) -> U64Vec3
fn yyy(self) -> U64Vec3
fn yyz(self) -> U64Vec3
fn yyw(self) -> U64Vec3
fn yzx(self) -> U64Vec3
fn yzy(self) -> U64Vec3
fn yzz(self) -> U64Vec3
fn yzw(self) -> U64Vec3
fn ywx(self) -> U64Vec3
fn ywy(self) -> U64Vec3
fn ywz(self) -> U64Vec3
fn yww(self) -> U64Vec3
fn zxx(self) -> U64Vec3
fn zxy(self) -> U64Vec3
fn zxz(self) -> U64Vec3
fn zxw(self) -> U64Vec3
fn zyx(self) -> U64Vec3
fn zyy(self) -> U64Vec3
fn zyz(self) -> U64Vec3
fn zyw(self) -> U64Vec3
fn zzx(self) -> U64Vec3
fn zzy(self) -> U64Vec3
fn zzz(self) -> U64Vec3
fn zzw(self) -> U64Vec3
fn zwx(self) -> U64Vec3
fn zwy(self) -> U64Vec3
fn zwz(self) -> U64Vec3
fn zww(self) -> U64Vec3
fn wxx(self) -> U64Vec3
fn wxy(self) -> U64Vec3
fn wxz(self) -> U64Vec3
fn wxw(self) -> U64Vec3
fn wyx(self) -> U64Vec3
fn wyy(self) -> U64Vec3
fn wyz(self) -> U64Vec3
fn wyw(self) -> U64Vec3
fn wzx(self) -> U64Vec3
fn wzy(self) -> U64Vec3
fn wzz(self) -> U64Vec3
fn wzw(self) -> U64Vec3
fn wwx(self) -> U64Vec3
fn wwy(self) -> U64Vec3
fn wwz(self) -> U64Vec3
fn www(self) -> U64Vec3
fn xxxx(self) -> U64Vec4
fn xxxy(self) -> U64Vec4
fn xxxz(self) -> U64Vec4
fn xxxw(self) -> U64Vec4
fn xxyx(self) -> U64Vec4
fn xxyy(self) -> U64Vec4
fn xxyz(self) -> U64Vec4
fn xxyw(self) -> U64Vec4
fn xxzx(self) -> U64Vec4
fn xxzy(self) -> U64Vec4
fn xxzz(self) -> U64Vec4
fn xxzw(self) -> U64Vec4
fn xxwx(self) -> U64Vec4
fn xxwy(self) -> U64Vec4
fn xxwz(self) -> U64Vec4
fn xxww(self) -> U64Vec4
fn xyxx(self) -> U64Vec4
fn xyxy(self) -> U64Vec4
fn xyxz(self) -> U64Vec4
fn xyxw(self) -> U64Vec4
fn xyyx(self) -> U64Vec4
fn xyyy(self) -> U64Vec4
fn xyyz(self) -> U64Vec4
fn xyyw(self) -> U64Vec4
fn xyzx(self) -> U64Vec4
fn xyzy(self) -> U64Vec4
fn xyzz(self) -> U64Vec4
fn xywx(self) -> U64Vec4
fn xywy(self) -> U64Vec4
fn xywz(self) -> U64Vec4
fn xyww(self) -> U64Vec4
fn xzxx(self) -> U64Vec4
fn xzxy(self) -> U64Vec4
fn xzxz(self) -> U64Vec4
fn xzxw(self) -> U64Vec4
fn xzyx(self) -> U64Vec4
fn xzyy(self) -> U64Vec4
fn xzyz(self) -> U64Vec4
fn xzyw(self) -> U64Vec4
fn xzzx(self) -> U64Vec4
fn xzzy(self) -> U64Vec4
fn xzzz(self) -> U64Vec4
fn xzzw(self) -> U64Vec4
fn xzwx(self) -> U64Vec4
fn xzwy(self) -> U64Vec4
fn xzwz(self) -> U64Vec4
fn xzww(self) -> U64Vec4
fn xwxx(self) -> U64Vec4
fn xwxy(self) -> U64Vec4
fn xwxz(self) -> U64Vec4
fn xwxw(self) -> U64Vec4
fn xwyx(self) -> U64Vec4
fn xwyy(self) -> U64Vec4
fn xwyz(self) -> U64Vec4
fn xwyw(self) -> U64Vec4
fn xwzx(self) -> U64Vec4
fn xwzy(self) -> U64Vec4
fn xwzz(self) -> U64Vec4
fn xwzw(self) -> U64Vec4
fn xwwx(self) -> U64Vec4
fn xwwy(self) -> U64Vec4
fn xwwz(self) -> U64Vec4
fn xwww(self) -> U64Vec4
fn yxxx(self) -> U64Vec4
fn yxxy(self) -> U64Vec4
fn yxxz(self) -> U64Vec4
fn yxxw(self) -> U64Vec4
fn yxyx(self) -> U64Vec4
fn yxyy(self) -> U64Vec4
fn yxyz(self) -> U64Vec4
fn yxyw(self) -> U64Vec4
fn yxzx(self) -> U64Vec4
fn yxzy(self) -> U64Vec4
fn yxzz(self) -> U64Vec4
fn yxzw(self) -> U64Vec4
fn yxwx(self) -> U64Vec4
fn yxwy(self) -> U64Vec4
fn yxwz(self) -> U64Vec4
fn yxww(self) -> U64Vec4
fn yyxx(self) -> U64Vec4
fn yyxy(self) -> U64Vec4
fn yyxz(self) -> U64Vec4
fn yyxw(self) -> U64Vec4
fn yyyx(self) -> U64Vec4
fn yyyy(self) -> U64Vec4
fn yyyz(self) -> U64Vec4
fn yyyw(self) -> U64Vec4
fn yyzx(self) -> U64Vec4
fn yyzy(self) -> U64Vec4
fn yyzz(self) -> U64Vec4
fn yyzw(self) -> U64Vec4
fn yywx(self) -> U64Vec4
fn yywy(self) -> U64Vec4
fn yywz(self) -> U64Vec4
fn yyww(self) -> U64Vec4
fn yzxx(self) -> U64Vec4
fn yzxy(self) -> U64Vec4
fn yzxz(self) -> U64Vec4
fn yzxw(self) -> U64Vec4
fn yzyx(self) -> U64Vec4
fn yzyy(self) -> U64Vec4
fn yzyz(self) -> U64Vec4
fn yzyw(self) -> U64Vec4
fn yzzx(self) -> U64Vec4
fn yzzy(self) -> U64Vec4
fn yzzz(self) -> U64Vec4
fn yzzw(self) -> U64Vec4
fn yzwx(self) -> U64Vec4
fn yzwy(self) -> U64Vec4
fn yzwz(self) -> U64Vec4
fn yzww(self) -> U64Vec4
fn ywxx(self) -> U64Vec4
fn ywxy(self) -> U64Vec4
fn ywxz(self) -> U64Vec4
fn ywxw(self) -> U64Vec4
fn ywyx(self) -> U64Vec4
fn ywyy(self) -> U64Vec4
fn ywyz(self) -> U64Vec4
fn ywyw(self) -> U64Vec4
fn ywzx(self) -> U64Vec4
fn ywzy(self) -> U64Vec4
fn ywzz(self) -> U64Vec4
fn ywzw(self) -> U64Vec4
fn ywwx(self) -> U64Vec4
fn ywwy(self) -> U64Vec4
fn ywwz(self) -> U64Vec4
fn ywww(self) -> U64Vec4
fn zxxx(self) -> U64Vec4
fn zxxy(self) -> U64Vec4
fn zxxz(self) -> U64Vec4
fn zxxw(self) -> U64Vec4
fn zxyx(self) -> U64Vec4
fn zxyy(self) -> U64Vec4
fn zxyz(self) -> U64Vec4
fn zxyw(self) -> U64Vec4
fn zxzx(self) -> U64Vec4
fn zxzy(self) -> U64Vec4
fn zxzz(self) -> U64Vec4
fn zxzw(self) -> U64Vec4
fn zxwx(self) -> U64Vec4
fn zxwy(self) -> U64Vec4
fn zxwz(self) -> U64Vec4
fn zxww(self) -> U64Vec4
fn zyxx(self) -> U64Vec4
fn zyxy(self) -> U64Vec4
fn zyxz(self) -> U64Vec4
fn zyxw(self) -> U64Vec4
fn zyyx(self) -> U64Vec4
fn zyyy(self) -> U64Vec4
fn zyyz(self) -> U64Vec4
fn zyyw(self) -> U64Vec4
fn zyzx(self) -> U64Vec4
fn zyzy(self) -> U64Vec4
fn zyzz(self) -> U64Vec4
fn zyzw(self) -> U64Vec4
fn zywx(self) -> U64Vec4
fn zywy(self) -> U64Vec4
fn zywz(self) -> U64Vec4
fn zyww(self) -> U64Vec4
fn zzxx(self) -> U64Vec4
fn zzxy(self) -> U64Vec4
fn zzxz(self) -> U64Vec4
fn zzxw(self) -> U64Vec4
fn zzyx(self) -> U64Vec4
fn zzyy(self) -> U64Vec4
fn zzyz(self) -> U64Vec4
fn zzyw(self) -> U64Vec4
fn zzzx(self) -> U64Vec4
fn zzzy(self) -> U64Vec4
fn zzzz(self) -> U64Vec4
fn zzzw(self) -> U64Vec4
fn zzwx(self) -> U64Vec4
fn zzwy(self) -> U64Vec4
fn zzwz(self) -> U64Vec4
fn zzww(self) -> U64Vec4
fn zwxx(self) -> U64Vec4
fn zwxy(self) -> U64Vec4
fn zwxz(self) -> U64Vec4
fn zwxw(self) -> U64Vec4
fn zwyx(self) -> U64Vec4
fn zwyy(self) -> U64Vec4
fn zwyz(self) -> U64Vec4
fn zwyw(self) -> U64Vec4
fn zwzx(self) -> U64Vec4
fn zwzy(self) -> U64Vec4
fn zwzz(self) -> U64Vec4
fn zwzw(self) -> U64Vec4
fn zwwx(self) -> U64Vec4
fn zwwy(self) -> U64Vec4
fn zwwz(self) -> U64Vec4
fn zwww(self) -> U64Vec4
fn wxxx(self) -> U64Vec4
fn wxxy(self) -> U64Vec4
fn wxxz(self) -> U64Vec4
fn wxxw(self) -> U64Vec4
fn wxyx(self) -> U64Vec4
fn wxyy(self) -> U64Vec4
fn wxyz(self) -> U64Vec4
fn wxyw(self) -> U64Vec4
fn wxzx(self) -> U64Vec4
fn wxzy(self) -> U64Vec4
fn wxzz(self) -> U64Vec4
fn wxzw(self) -> U64Vec4
fn wxwx(self) -> U64Vec4
fn wxwy(self) -> U64Vec4
fn wxwz(self) -> U64Vec4
fn wxww(self) -> U64Vec4
fn wyxx(self) -> U64Vec4
fn wyxy(self) -> U64Vec4
fn wyxz(self) -> U64Vec4
fn wyxw(self) -> U64Vec4
fn wyyx(self) -> U64Vec4
fn wyyy(self) -> U64Vec4
fn wyyz(self) -> U64Vec4
fn wyyw(self) -> U64Vec4
fn wyzx(self) -> U64Vec4
fn wyzy(self) -> U64Vec4
fn wyzz(self) -> U64Vec4
fn wyzw(self) -> U64Vec4
fn wywx(self) -> U64Vec4
fn wywy(self) -> U64Vec4
fn wywz(self) -> U64Vec4
fn wyww(self) -> U64Vec4
fn wzxx(self) -> U64Vec4
fn wzxy(self) -> U64Vec4
fn wzxz(self) -> U64Vec4
fn wzxw(self) -> U64Vec4
fn wzyx(self) -> U64Vec4
fn wzyy(self) -> U64Vec4
fn wzyz(self) -> U64Vec4
fn wzyw(self) -> U64Vec4
fn wzzx(self) -> U64Vec4
fn wzzy(self) -> U64Vec4
fn wzzz(self) -> U64Vec4
fn wzzw(self) -> U64Vec4
fn wzwx(self) -> U64Vec4
fn wzwy(self) -> U64Vec4
fn wzwz(self) -> U64Vec4
fn wzww(self) -> U64Vec4
fn wwxx(self) -> U64Vec4
fn wwxy(self) -> U64Vec4
fn wwxz(self) -> U64Vec4
fn wwxw(self) -> U64Vec4
fn wwyx(self) -> U64Vec4
fn wwyy(self) -> U64Vec4
fn wwyz(self) -> U64Vec4
fn wwyw(self) -> U64Vec4
fn wwzx(self) -> U64Vec4
fn wwzy(self) -> U64Vec4
fn wwzz(self) -> U64Vec4
fn wwzw(self) -> U64Vec4
fn wwwx(self) -> U64Vec4
fn wwwy(self) -> U64Vec4
fn wwwz(self) -> U64Vec4
fn wwww(self) -> U64Vec4
fn xyzw(self) -> Self
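A short swizzle sketch; the Vec4Swizzles trait must be in scope, and its re-export path (along with U64Vec2's constructor) is an assumption here:

use valence_math::u64::{U64Vec2, U64Vec4}; // assumed import paths
use valence_math::Vec4Swizzles;            // assumed re-export location of the swizzle trait

let v = U64Vec4::from_array([1, 2, 3, 4]);
assert!(v.wzyx() == U64Vec4::from_array([4, 3, 2, 1])); // reverse the components
assert!(v.xw() == U64Vec2::new(1, 4));                  // pick x and w into a 2D vector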
impl Copy for U64Vec4
impl Eq for U64Vec4
impl StructuralPartialEq for U64Vec4
Auto Trait Implementations
impl Freeze for U64Vec4
impl RefUnwindSafe for U64Vec4
impl Send for U64Vec4
impl Sync for U64Vec4
impl Unpin for U64Vec4
impl UnwindSafe for U64Vec4
Blanket Implementations
impl<T> BorrowMut<T> for T
where T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
impl<T> CloneToUninit for T
where T: Clone,
unsafe fn clone_to_uninit(&self, dst: *mut T)
Nightly-only experimental API (clone_to_uninit).