Public Unsafe APIs — nightly (rustc 1.97.0-nightly (9ec5d5f32 2026-04-21))

Generated from crates: core, alloc, std.

Format of each entry: Index. `module::path::Name` (kind) — safety documentation.
1. `alloc::alloc::alloc` (function) — See [`GlobalAlloc::alloc`].
2. `alloc::alloc::alloc_zeroed` (function) — See [`GlobalAlloc::alloc_zeroed`].
3. `alloc::alloc::dealloc` (function) — See [`GlobalAlloc::dealloc`].
4. `alloc::alloc::realloc` (function) — See [`GlobalAlloc::realloc`].
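A minimal sketch of correctly pairing these free functions (the `u32` payload is an arbitrary choice for illustration): the pointer returned by `alloc` must be passed back to `dealloc` with the exact same `Layout` it was allocated with.

```rust
use std::alloc::{alloc, dealloc, Layout};

fn main() {
    // Layout for a single u32: size 4, alignment 4.
    let layout = Layout::new::<u32>();
    unsafe {
        // Safety: `layout` has non-zero size.
        let p = alloc(layout);
        assert!(!p.is_null(), "allocation failed");
        // Write and read back through the raw pointer.
        (p as *mut u32).write(42);
        assert_eq!((p as *const u32).read(), 42);
        // Safety: `p` was returned by `alloc` with this exact layout
        // and is freed exactly once.
        dealloc(p, layout);
    }
}
```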
5. `alloc::boxed::Box::assume_init` (function) — As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the value really is in an initialized state (for the boxed-slice variant, that all values really are). Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
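A sketch of the intended flow on stable Rust: allocate uninitialized storage with `Box::new_uninit`, fully initialize it, and only then call `assume_init`.

```rust
fn main() {
    // Box<MaybeUninit<u32>>: heap storage, not yet initialized.
    let mut boxed = Box::<u32>::new_uninit();
    // Fully initialize the contents before assume_init.
    boxed.write(7);
    // Safety: the value was just fully initialized by `write` above.
    let val: Box<u32> = unsafe { boxed.assume_init() };
    assert_eq!(*val, 7);
}
```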
6. `alloc::boxed::Box::downcast_unchecked` (function) — The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. [`downcast`]: Self::downcast
7. `alloc::boxed::Box::from_non_null` (function) — This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same `NonNull` pointer. The non-null pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section. Note that the [considerations for unsafe code] apply to all `Box<T>` values.
8. `alloc::boxed::Box::from_non_null_in` (function) — This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The non-null pointer must point to a block of memory allocated by `alloc`. The safety conditions are described in the [memory layout] section. Note that the [considerations for unsafe code] apply to all `Box<T, A>` values.
9. `alloc::boxed::Box::from_raw` (function) — This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section. Note that the [considerations for unsafe code] apply to all `Box<T>` values.
10. `alloc::boxed::Box::from_raw_in` (function) — This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by `alloc`. The safety conditions are described in the [memory layout] section. Note that the [considerations for unsafe code] apply to all `Box<T, A>` values.
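The safe pattern for `from_raw` is a round trip: the pointer comes from `Box::into_raw` and is reclaimed exactly once. A minimal sketch:

```rust
fn main() {
    let b = Box::new(String::from("hello"));
    // Leak the Box into a raw pointer; we are now responsible for it.
    let raw: *mut String = Box::into_raw(b);
    // Safety: `raw` came from `Box::into_raw` and is reclaimed
    // exactly once; dropping `back` frees the allocation normally.
    let back = unsafe { Box::from_raw(raw) };
    assert_eq!(*back, "hello");
}
```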
11. `alloc::collections::binary_heap::BinaryHeap::as_mut_slice` (function) — The caller must ensure that the slice remains a max-heap, i.e. for all indices `0 < i < slice.len()`, `slice[(i - 1) / 2] >= slice[i]`, before the borrow ends and the binary heap is used.
12. `alloc::collections::binary_heap::BinaryHeap::from_raw_vec` (function) — The supplied `vec` must be a max-heap, i.e. for all indices `0 < i < vec.len()`, `vec[(i - 1) / 2] >= vec[i]`.
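The max-heap invariant both methods state can be checked mechanically; `is_max_heap` below is a hypothetical helper (not part of the listed API) that encodes exactly the `slice[(i - 1) / 2] >= slice[i]` condition:

```rust
// Hypothetical checker for the invariant quoted above:
// every element is <= its parent at index (i - 1) / 2.
fn is_max_heap(v: &[i32]) -> bool {
    (1..v.len()).all(|i| v[(i - 1) / 2] >= v[i])
}

fn main() {
    // 9 is the root; 5 and 8 are its children; 1 and 3 are 5's children.
    assert!(is_max_heap(&[9, 5, 8, 1, 3]));
    // 5 > 1 violates the invariant at the root.
    assert!(!is_max_heap(&[1, 5, 8]));
}
```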
13. `alloc::collections::btree::map::CursorMut::insert_after_unchecked` (function) — You must ensure that the `BTreeMap` invariants are maintained. Specifically: the key of the newly inserted element must be unique in the tree, and all keys in the tree must remain in sorted order.
14. `alloc::collections::btree::map::CursorMut::insert_before_unchecked` (function) — You must ensure that the `BTreeMap` invariants are maintained. Specifically: the key of the newly inserted element must be unique in the tree, and all keys in the tree must remain in sorted order.
15. `alloc::collections::btree::map::CursorMut::with_mutable_key` (function) — Since this cursor allows mutating keys, you must ensure that the `BTreeMap` invariants are maintained. Specifically: the key of the newly inserted element must be unique in the tree, and all keys in the tree must remain in sorted order.
16. `alloc::collections::btree::map::CursorMutKey::insert_after_unchecked` (function) — You must ensure that the `BTreeMap` invariants are maintained. Specifically: the key of the newly inserted element must be unique in the tree, and all keys in the tree must remain in sorted order.
17. `alloc::collections::btree::map::CursorMutKey::insert_before_unchecked` (function) — You must ensure that the `BTreeMap` invariants are maintained. Specifically: the key of the newly inserted element must be unique in the tree, and all keys in the tree must remain in sorted order.
18. `alloc::collections::btree::set::CursorMut::insert_after_unchecked` (function) — You must ensure that the `BTreeSet` invariants are maintained. Specifically: the newly inserted element must be unique in the tree, and all elements in the tree must remain in sorted order.
19. `alloc::collections::btree::set::CursorMut::insert_before_unchecked` (function) — You must ensure that the `BTreeSet` invariants are maintained. Specifically: the newly inserted element must be unique in the tree, and all elements in the tree must remain in sorted order.
20. `alloc::collections::btree::set::CursorMut::with_mutable_key` (function) — Since this cursor allows mutating elements, you must ensure that the `BTreeSet` invariants are maintained. Specifically: the newly inserted element must be unique in the tree, and all elements in the tree must remain in sorted order.
21. `alloc::collections::btree::set::CursorMutKey::insert_after_unchecked` (function) — You must ensure that the `BTreeSet` invariants are maintained. Specifically: the newly inserted element must be unique in the tree, and all elements in the tree must remain in sorted order.
22. `alloc::collections::btree::set::CursorMutKey::insert_before_unchecked` (function) — You must ensure that the `BTreeSet` invariants are maintained. Specifically: the newly inserted element must be unique in the tree, and all elements in the tree must remain in sorted order.
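The cursor APIs above are nightly-only, but the invariant they all require — keys strictly increasing, hence unique and sorted — can be stated and checked with stable `BTreeMap` code. A sketch of the property a caller must preserve across an unchecked insert:

```rust
use std::collections::BTreeMap;

fn main() {
    let map = BTreeMap::from([(1, "a"), (2, "b"), (3, "c")]);
    // The invariant the unchecked cursor inserts rely on:
    // iteration order yields strictly increasing (unique, sorted) keys.
    let keys: Vec<_> = map.keys().copied().collect();
    assert!(keys.windows(2).all(|w| w[0] < w[1]));
}
```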
23. `alloc::ffi::c_str::CString::from_raw` (function) — This should only ever be called with a pointer that was earlier obtained by calling [`CString::into_raw`], and the memory it points to must not be accessed through any other pointer during the lifetime of the reconstructed `CString`. Other usage (e.g., trying to take ownership of a string that was allocated by foreign code) is likely to lead to undefined behavior or allocator corruption. This function does not validate ownership of the raw pointer's memory. A double-free may occur if the function is called twice on the same raw pointer. Additionally, the caller must ensure the pointer is not dangling. It should be noted that the length isn't just "recomputed," but that the recomputed length must match the original length from the [`CString::into_raw`] call. This means the [`CString::into_raw`]/`from_raw` methods should not be used when passing the string to C functions that can modify the string's length. Note: if you need to borrow a string that was allocated by foreign code, use [`CStr`]. If you need to take ownership of a string that was allocated by foreign code, you will need to make your own provisions for freeing it appropriately, likely with the foreign code's API to do that.
24. `alloc::ffi::c_str::CString::from_vec_unchecked` (function) — The caller must ensure `v` contains no nul bytes in its contents.
25. `alloc::ffi::c_str::CString::from_vec_with_nul_unchecked` (function) — The given [`Vec`] **must** have one nul byte as its last element. This means it cannot be empty nor have any other nul byte anywhere else.
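A sketch of both contracts: `from_vec_unchecked` with known nul-free bytes, and the `into_raw`/`from_raw` round trip that `from_raw` requires.

```rust
use std::ffi::CString;

fn main() {
    // Safety: "hello" contains no interior nul bytes.
    let s = unsafe { CString::from_vec_unchecked(b"hello".to_vec()) };
    assert_eq!(s.as_bytes(), b"hello");

    // Round trip: `from_raw` must only receive a pointer produced by
    // `into_raw`, exactly once, with the string length unmodified.
    let raw = s.into_raw();
    // Safety: `raw` came from `into_raw` above and is reclaimed once.
    let back = unsafe { CString::from_raw(raw) };
    assert_eq!(back.to_str().unwrap(), "hello");
}
```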
26. `alloc::rc::Rc::assume_init` (function) — As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
27. `alloc::rc::Rc::decrement_strong_count` (function) — The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in
28. `alloc::rc::Rc::decrement_strong_count_in` (function) — The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in
29. `alloc::rc::Rc::downcast_unchecked` (function) — The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. [`downcast`]: Self::downcast
30. `alloc::rc::Rc::from_raw` (function) —
   * Creating an `Rc<T>` from a pointer other than one returned from [`Rc<U>::into_raw`][into_raw] or [`Rc<U>::into_raw_with_allocator`][into_raw_with_allocator] is undefined behavior.
   * If `U` is sized, it must have the same size and alignment as `T`. This is trivially true if `U` is `T`.
   * If `U` is unsized, its data pointer must have the same size and alignment as `T`. This is trivially true if `Rc<U>` was constructed through `Rc<T>` and then converted to `Rc<U>` through an [unsized coercion].
   * Note that if `U` or `U`'s data pointer is not `T` but has the same size and alignment, this is basically like transmuting references of different types. See [`mem::transmute`][transmute] for more information on what restrictions apply in this case.
   * The raw pointer must point to a block of memory allocated by the global allocator.
   * The user of `from_raw` has to make sure a specific value of `T` is only dropped once.
   This function is unsafe because improper use may lead to memory unsafety, even if the returned `Rc<T>` is never accessed. [into_raw]: Rc::into_raw [into_raw_with_allocator]: Rc::into_raw_with_allocator [transmute]: core::mem::transmute [unsized coercion]: https://doc.rust-lang.org/reference/type-coercions.html#unsized-coercions
31. `alloc::rc::Rc::from_raw_in` (function) —
   * Creating an `Rc<T, A>` from a pointer other than one returned from [`Rc<U, A>::into_raw`][into_raw] or [`Rc<U, A>::into_raw_with_allocator`][into_raw_with_allocator] is undefined behavior.
   * If `U` is sized, it must have the same size and alignment as `T`. This is trivially true if `U` is `T`.
   * If `U` is unsized, its data pointer must have the same size and alignment as `T`. This is trivially true if `Rc<U, A>` was constructed through `Rc<T, A>` and then converted to `Rc<U, A>` through an [unsized coercion].
   * Note that if `U` or `U`'s data pointer is not `T` but has the same size and alignment, this is basically like transmuting references of different types. See [`mem::transmute`][transmute] for more information on what restrictions apply in this case.
   * The raw pointer must point to a block of memory allocated by `alloc`.
   * The user of `from_raw` has to make sure a specific value of `T` is only dropped once.
   This function is unsafe because improper use may lead to memory unsafety, even if the returned `Rc<T, A>` is never accessed. [into_raw]: Rc::into_raw [into_raw_with_allocator]: Rc::into_raw_with_allocator [transmute]: core::mem::transmute [unsized coercion]: https://doc.rust-lang.org/reference/type-coercions.html#unsized-coercions
32. `alloc::rc::Rc::get_mut_unchecked` (function) — If any other `Rc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this `Rc` (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Rc::new`.
33. `alloc::rc::Rc::increment_strong_count` (function) — The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Rc::from_raw_in
34. `alloc::rc::Rc::increment_strong_count_in` (function) — The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Rc::from_raw_in
35. `alloc::rc::Weak::from_raw` (function) — The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
36. `alloc::rc::Weak::from_raw_in` (function) — The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
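A sketch tying the `Rc` raw-pointer methods together: `into_raw` yields a pointer that counts as one strong reference, `increment_strong_count` adds another, `from_raw` reclaims one, and `decrement_strong_count` releases the other.

```rust
use std::rc::Rc;

fn main() {
    let rc = Rc::new(10);
    // `ptr` now represents one strong reference.
    let ptr = Rc::into_raw(rc);
    // Safety: `ptr` came from `Rc::into_raw` and the allocation is live.
    unsafe { Rc::increment_strong_count(ptr) };
    // Safety: reclaim one of the two strong references we own.
    let a = unsafe { Rc::from_raw(ptr) };
    assert_eq!(Rc::strong_count(&a), 2);
    // Safety: release the remaining raw-pointer-held strong reference;
    // not called again after this.
    unsafe { Rc::decrement_strong_count(ptr) };
    assert_eq!(Rc::strong_count(&a), 1);
}
```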
37. `alloc::str::from_boxed_utf8_unchecked` (function) — The provided bytes must contain a valid UTF-8 sequence.
38. `alloc::string::String::as_mut_vec` (function) — This function is unsafe because the returned `&mut Vec` allows writing bytes which are not valid UTF-8. If this constraint is violated, using the original `String` after dropping the `&mut Vec` may violate memory safety, as the rest of the standard library assumes that `String`s are valid UTF-8.
39. `alloc::string::String::from_raw_parts` (function) — This is highly unsafe, due to the number of invariants that aren't checked:
   * all safety requirements for [`Vec::<u8>::from_raw_parts`];
   * all safety requirements for [`String::from_utf8_unchecked`].
   Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `String` from a pointer to a C `char` array containing UTF-8 _unless_ you are certain that array was originally allocated by the Rust standard library's allocator. The ownership of `buf` is effectively transferred to the `String`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function.
40. `alloc::string::String::from_utf8_unchecked` (function) — This function is unsafe because it does not check that the bytes passed to it are valid UTF-8. If this constraint is violated, it may cause memory unsafety issues with future users of the `String`, as the rest of the standard library assumes that `String`s are valid UTF-8.
41. `alloc::sync::Arc::assume_init` (function) — As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
42. `alloc::sync::Arc::decrement_strong_count` (function) — The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in
43. `alloc::sync::Arc::decrement_strong_count_in` (function) — The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in
44. `alloc::sync::Arc::downcast_unchecked` (function) — The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. [`downcast`]: Self::downcast
45. `alloc::sync::Arc::from_raw` (function) —
   * Creating an `Arc<T>` from a pointer other than one returned from [`Arc<U>::into_raw`][into_raw] or [`Arc<U>::into_raw_with_allocator`][into_raw_with_allocator] is undefined behavior.
   * If `U` is sized, it must have the same size and alignment as `T`. This is trivially true if `U` is `T`.
   * If `U` is unsized, its data pointer must have the same size and alignment as `T`. This is trivially true if `Arc<U>` was constructed through `Arc<T>` and then converted to `Arc<U>` through an [unsized coercion].
   * Note that if `U` or `U`'s data pointer is not `T` but has the same size and alignment, this is basically like transmuting references of different types. See [`mem::transmute`][transmute] for more information on what restrictions apply in this case.
   * The raw pointer must point to a block of memory allocated by the global allocator.
   * The user of `from_raw` has to make sure a specific value of `T` is only dropped once.
   This function is unsafe because improper use may lead to memory unsafety, even if the returned `Arc<T>` is never accessed. [into_raw]: Arc::into_raw [into_raw_with_allocator]: Arc::into_raw_with_allocator [transmute]: core::mem::transmute [unsized coercion]: https://doc.rust-lang.org/reference/type-coercions.html#unsized-coercions
46. `alloc::sync::Arc::from_raw_in` (function) —
   * Creating an `Arc<T, A>` from a pointer other than one returned from [`Arc<U, A>::into_raw`][into_raw] or [`Arc<U, A>::into_raw_with_allocator`][into_raw_with_allocator] is undefined behavior.
   * If `U` is sized, it must have the same size and alignment as `T`. This is trivially true if `U` is `T`.
   * If `U` is unsized, its data pointer must have the same size and alignment as `T`. This is trivially true if `Arc<U, A>` was constructed through `Arc<T, A>` and then converted to `Arc<U, A>` through an [unsized coercion].
   * Note that if `U` or `U`'s data pointer is not `T` but has the same size and alignment, this is basically like transmuting references of different types. See [`mem::transmute`][transmute] for more information on what restrictions apply in this case.
   * The raw pointer must point to a block of memory allocated by `alloc`.
   * The user of `from_raw` has to make sure a specific value of `T` is only dropped once.
   This function is unsafe because improper use may lead to memory unsafety, even if the returned `Arc<T, A>` is never accessed. [into_raw]: Arc::into_raw [into_raw_with_allocator]: Arc::into_raw_with_allocator [transmute]: core::mem::transmute [unsized coercion]: https://doc.rust-lang.org/reference/type-coercions.html#unsized-coercions
47. `alloc::sync::Arc::get_mut_unchecked` (function) — If any other `Arc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this `Arc` (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Arc::new`.
48. `alloc::sync::Arc::increment_strong_count` (function) — The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Arc::from_raw_in
49. `alloc::sync::Arc::increment_strong_count_in` (function) — The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Arc::from_raw_in
50. `alloc::sync::Weak::from_raw` (function) — The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
51. `alloc::sync::Weak::from_raw_in` (function) — The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
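A sketch of the `Weak::into_raw`/`from_raw` pairing the entry above requires: the raw pointer carries ownership of exactly one weak reference.

```rust
use std::sync::{Arc, Weak};

fn main() {
    let arc = Arc::new(5);
    let weak = Arc::downgrade(&arc);
    // `raw` now owns the weak reference `weak` used to hold.
    let raw = Weak::into_raw(weak);
    // Safety: `raw` came from `Weak::into_raw` and is reclaimed
    // exactly once, pairing with that call.
    let weak2 = unsafe { Weak::from_raw(raw) };
    // The strong reference is still alive, so upgrade succeeds.
    assert_eq!(weak2.upgrade().map(|a| *a), Some(5));
}
```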
52. `alloc::vec::Vec::from_parts` (function) — This is highly unsafe, due to the number of invariants that aren't checked:
   * `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function.
   * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
   * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because, similar to alignment, [`dealloc`] must be called with the same layout `size`.)
   * `length` needs to be less than or equal to `capacity`.
   * The first `length` values must be properly initialized values of type `T`.
   * `capacity` needs to be the capacity that the pointer was allocated with.
   * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].
   These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`NonNull::slice_from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
53. `alloc::vec::Vec::from_parts_in` (function) — This is highly unsafe, due to the number of invariants that aren't checked:
   * `ptr` must be [*currently allocated*] via the given allocator `alloc`.
   * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
   * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because, similar to alignment, [`dealloc`] must be called with the same layout `size`.)
   * `length` needs to be less than or equal to `capacity`.
   * The first `length` values must be properly initialized values of type `T`.
   * `capacity` needs to [*fit*] the layout size that the pointer was allocated with.
   * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].
   These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting
54. `alloc::vec::Vec::from_raw_parts` (function) — This is highly unsafe, due to the number of invariants that aren't checked:
   * If `T` is not a zero-sized type and the capacity is nonzero, `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function. If `T` is a zero-sized type or the capacity is zero, `ptr` need only be non-null and aligned.
   * `T` needs to have the same alignment as what `ptr` was allocated with, if the pointer is required to be allocated. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
   * The size of `T` times the `capacity` (i.e. the allocated size in bytes), if nonzero, needs to be the same size as the pointer was allocated with. (Because, similar to alignment, [`dealloc`] must be called with the same layout `size`.)
   * `length` needs to be less than or equal to `capacity`.
   * The first `length` values must be properly initialized values of type `T`.
   * `capacity` needs to be the capacity that the pointer was allocated with, if the pointer is required to be allocated.
   * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].
   These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`slice::from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
55. `alloc::vec::Vec::from_raw_parts_in` (function) — This is highly unsafe, due to the number of invariants that aren't checked:
   * `ptr` must be [*currently allocated*] via the given allocator `alloc`.
   * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
   * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because, similar to alignment, [`dealloc`] must be called with the same layout `size`.)
   * `length` needs to be less than or equal to `capacity`.
   * The first `length` values must be properly initialized values of type `T`.
   * `capacity` needs to [*fit*] the layout size that the pointer was allocated with.
   * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].
   These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting
56. `alloc::vec::Vec::set_len` (function) —
   - `new_len` must be less than or equal to [`capacity()`].
   - The elements at `old_len..new_len` must be initialized.
   [`capacity()`]: Vec::capacity
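Two sketches of upholding the `Vec` contracts above on stable Rust: rebuilding a `Vec` from parts that describe exactly the allocation it came from, and calling `set_len` only after the new elements are initialized.

```rust
use std::mem::ManuallyDrop;

fn main() {
    // `from_raw_parts`: decompose a Vec and rebuild it with the same
    // ptr/len/capacity. ManuallyDrop prevents a double-free of the buffer.
    let v = vec![1u8, 2, 3];
    let mut v = ManuallyDrop::new(v);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    // Safety: ptr/len/cap describe exactly the allocation we gave up above.
    let rebuilt = unsafe { Vec::from_raw_parts(ptr, len, cap) };
    assert_eq!(rebuilt, [1, 2, 3]);

    // `set_len`: new_len <= capacity, and all bytes in 0..4 are
    // initialized before the length is raised.
    let mut buf: Vec<u8> = Vec::with_capacity(4);
    unsafe {
        buf.as_mut_ptr().write_bytes(0, 4);
        // Safety: new_len == capacity and the 4 bytes were just written.
        buf.set_len(4);
    }
    assert_eq!(buf, [0, 0, 0, 0]);
}
```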
57. `core::alloc::Allocator` (trait) — Memory blocks that are [*currently allocated*] by an allocator must point to valid memory, and retain their validity until either:
   - the memory block is deallocated, or
   - the allocator is dropped.
   Copying, cloning, or moving the allocator must not invalidate memory blocks returned from it. A copied or cloned allocator must behave like the original allocator. A memory block which is [*currently allocated*] may be passed to any method of the allocator that accepts such an argument. Additionally, any memory block returned by the allocator must satisfy the allocation invariants described in `core::ptr`. In particular, if a block has base address `p` and size `n`, then `p as usize + n <= usize::MAX` must hold. This ensures that pointer arithmetic within the allocation (for example, `ptr.add(len)`) cannot overflow the address space. [*currently allocated*]: #currently-allocated-memory
58core::allocGlobalAlloctraitThe `GlobalAlloc` trait is an `unsafe` trait for a number of reasons, and implementors must ensure that they adhere to these contracts: * It's undefined behavior if global allocators unwind. This restriction may be lifted in the future, but currently a panic from any of these functions may lead to memory unsafety. * `Layout` queries and calculations in general must be correct. Callers of this trait are allowed to rely on the contracts defined on each method, and implementors must ensure such contracts remain true. * You must not rely on allocations actually happening, even if there are explicit heap allocations in the source. The optimizer may detect unused allocations that it can either eliminate entirely or move to the stack and thus never invoke the allocator. The optimizer may further assume that allocation is infallible, so code that used to fail due to allocator failures may now suddenly work because the optimizer worked around the need for an allocation. More concretely, the following code example is unsound, irrespective of whether your custom allocator allows counting how many allocations have happened. ```rust,ignore (unsound and has placeholders) drop(Box::new(42)); let number_of_heap_allocs = /* call private allocator API */; unsafe { std::hint::assert_unchecked(number_of_heap_allocs > 0); } ``` Note that the optimizations mentioned above are not the only optimization that can be applied. You may generally not rely on heap allocations happening if they can be removed without changing program behavior. Whether allocations happen or not is not part of the program behavior, even if it could be detected via an allocator that tracks allocations by printing or otherwise having side effects.
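A minimal conforming implementation can simply delegate to another allocator. The sketch below forwards to `std::alloc::System` and counts calls, which, per the contract above, is fine for statistics but must never feed a safety assumption:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Delegates to the system allocator and counts allocations. These counts
/// are observational only: unsafe code must never assume a particular
/// allocation actually happened, since the optimizer may elide it.
struct Counting;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed);
        // SAFETY: forwarded to System with the caller's layout unchanged,
        // so System's contract is upheld whenever the caller's is.
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        // SAFETY: ptr was returned by `self.alloc` (i.e. System) for `layout`.
        unsafe { System.dealloc(ptr, layout) }
    }
}

fn main() {
    let layout = Layout::new::<u64>();
    // SAFETY: the layout has nonzero size, the pointer is null-checked,
    // and it is freed with the same allocator and layout.
    unsafe {
        let p = Counting.alloc(layout);
        assert!(!p.is_null());
        Counting.dealloc(p, layout);
    }
    assert!(ALLOCS.load(Ordering::Relaxed) >= 1);
}
```

Registering it globally would use `#[global_allocator] static A: Counting = Counting;` — note the implementation never panics, per the first contract item.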
59core::alloc::layout::Layoutfor_value_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable for the type `T` acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Layout::for_value`] on a reference to an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
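`for_value_raw` is the raw-pointer counterpart of the stable `Layout::for_value`; the `Sized` and slice-tail cases above can be checked with the safe version:

```rust
use std::alloc::Layout;

fn main() {
    // For a Sized type, the layout matches Layout::new::<T>().
    assert_eq!(Layout::for_value(&0u32), Layout::new::<u32>());

    // For a slice tail, the size is the dynamic length times the element
    // size (plus any statically sized prefix, none here).
    let s: &[u16] = &[1, 2, 3];
    let layout = Layout::for_value(s);
    assert_eq!(layout.size(), 6);
    assert_eq!(layout.align(), std::mem::align_of::<u16>());
}
```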
60core::alloc::layout::Layoutfrom_size_align_uncheckedfunctionThis function is unsafe as it does not verify the preconditions from [`Layout::from_size_align`].
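A common sound pattern is to establish the preconditions with the checked constructor and only then call the unchecked one; a sketch:

```rust
use std::alloc::Layout;

/// Builds a layout after verifying the preconditions the `_unchecked`
/// constructor skips: `align` is a nonzero power of two, and `size`
/// rounded up to `align` does not overflow `isize::MAX`.
fn layout_checked(size: usize, align: usize) -> Option<Layout> {
    Layout::from_size_align(size, align).ok()
}

fn main() {
    let checked = layout_checked(12, 4).unwrap();
    // SAFETY: 4 is a nonzero power of two and 12 rounded up to 4 fits in
    // isize::MAX — exactly the preconditions of Layout::from_size_align.
    let unchecked = unsafe { Layout::from_size_align_unchecked(12, 4) };
    assert_eq!(checked, unchecked);

    // The checked constructor rejects a non-power-of-two alignment.
    assert!(layout_checked(12, 3).is_none());
}
```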
61core::alloc::layout::Layoutfrom_size_alignment_uncheckedfunctionThis function is unsafe as it does not verify the preconditions from [`Layout::from_size_alignment`].
62core::arrayas_ascii_uncheckedfunctionEvery byte in the array must be in `0..=127`, or else this is UB.
63core::array::iter::IntoIternew_uncheckedfunction- The `buffer[initialized]` elements must all be initialized. - The range must be canonical, with `initialized.start <= initialized.end`. - The range must be in-bounds for the buffer, with `initialized.end <= N`. (Like how indexing `[0][100..100]` fails despite the range being empty.) It's sound to have more elements initialized than mentioned, though that will most likely result in them being leaked.
64core::ascii::ascii_char::AsciiChardigit_uncheckedfunctionThis is immediate UB if called with `d > 64`. If `d >= 10` and `d <= 64`, this is allowed to return any value or panic. Notably, it should not be expected to return hex digits, or any other reasonable extension of the decimal digits. (This loose safety condition is intended to simplify soundness proofs when writing code using this method, since the implementation is not required to do anything specific for those inputs; it is not meant to make those other arguments do something useful. It might be tightened before stabilization.)
65core::ascii::ascii_char::AsciiCharfrom_u8_uncheckedfunction`b` must be in `0..=127`, or else this is UB.
66core::cellCloneFromCelltraitImplementing this trait for a type is sound if and only if the following code is sound for `T` = that type. ``` #![feature(cell_get_cloned)] … ```
67core::cell::RefCelltry_borrow_unguardedfunctionUnlike `RefCell::borrow`, this method is unsafe because it does not return a `Ref`, thus leaving the borrow flag untouched. Mutably borrowing the `RefCell` while the reference returned by this method is alive is undefined behavior.
68core::cell::UnsafeCellas_mut_uncheckedfunction- It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive. - Mutating the wrapped value through other means while the returned reference is alive is Undefined Behavior.
69core::cell::UnsafeCellas_ref_uncheckedfunction- It is Undefined Behavior to call this while any mutable reference to the wrapped value is alive. - Mutating the wrapped value while the returned reference is alive is Undefined Behavior.
70core::cell::UnsafeCellreplacefunctionThe caller must take care to avoid aliasing and data races. - It is Undefined Behavior to allow calls to race with any other access to the wrapped value. - It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive.
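These accessors are nightly-only, but the same aliasing rules govern the stable `UnsafeCell::get` raw-pointer API, sketched here:

```rust
use std::cell::UnsafeCell;

/// Updates the wrapped value through the stable raw-pointer API while
/// respecting the rules above: no other reference to the value is alive
/// during the accesses, and nothing races with them.
fn bump(cell: &UnsafeCell<i32>) -> i32 {
    unsafe {
        // SAFETY: `get` only produces a raw pointer; we are the sole
        // accessor here, so the write and the subsequent read neither
        // race nor alias a live reference.
        *cell.get() += 1;
        *cell.get()
    }
}

fn main() {
    let cell = UnsafeCell::new(5);
    assert_eq!(bump(&cell), 6);
    assert_eq!(bump(&cell), 7);
}
```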
71core::charas_ascii_uncheckedfunctionThis char must be within the ASCII range, or else this is UB.
72core::charfrom_u32_uncheckedfunctionThis function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32
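The usual sound pattern is to validate once with the safe `from_u32` and only then call the unchecked variant; a sketch:

```rust
/// Converts a u32 to char, first with the checked API and then with the
/// unsafe one, whose precondition the check has just established.
fn validated_char(u: u32) -> Option<char> {
    let checked = char::from_u32(u)?;
    // SAFETY: char::from_u32 confirmed `u` is a valid Unicode scalar
    // value (not a surrogate, not above char::MAX).
    let unchecked = unsafe { char::from_u32_unchecked(u) };
    debug_assert_eq!(checked, unchecked);
    Some(unchecked)
}

fn main() {
    assert_eq!(validated_char(0x2764), Some('❤'));
    assert_eq!(validated_char(0xD800), None); // surrogate: rejected safely
}
```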
73core::cloneCloneToUninittraitImplementations must ensure that when `.clone_to_uninit(dest)` returns normally rather than panicking, it always leaves `*dest` initialized as a valid value of type `Self`.
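The trait itself is unstable, but its contract can be illustrated with the stable `MaybeUninit` API: a clone into uninitialized storage must leave the destination holding a valid value whenever it returns normally. A sketch:

```rust
use std::mem::MaybeUninit;

/// Clones `src` into uninitialized storage and returns the result,
/// mirroring the CloneToUninit contract: on normal return the
/// destination is fully initialized.
fn clone_into_uninit<T: Clone>(src: &T) -> T {
    let mut dest = MaybeUninit::<T>::uninit();
    dest.write(src.clone());
    // SAFETY: `write` just stored a valid T in `dest`; had `clone`
    // panicked instead, control would never reach this point.
    unsafe { dest.assume_init() }
}

fn main() {
    assert_eq!(clone_into_uninit(&String::from("hi")), "hi");
    assert_eq!(clone_into_uninit(&42), 42);
}
```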
74core::cloneTrivialClonetrait`Clone::clone` must be equivalent to copying the value; otherwise, calling functions such as `slice::clone_from_slice` can have undefined behavior.
75core::core_arch::aarch64::mte__arm_mte_create_random_tagfunction
76core::core_arch::aarch64::mte__arm_mte_exclude_tagfunction
77core::core_arch::aarch64::mte__arm_mte_get_tagfunction
78core::core_arch::aarch64::mte__arm_mte_increment_tagfunction
79core::core_arch::aarch64::mte__arm_mte_ptrdifffunction
80core::core_arch::aarch64::mte__arm_mte_set_tagfunction
81core::core_arch::aarch64::neonvld1_dup_f64function
82core::core_arch::aarch64::neonvld1_lane_f64function
83core::core_arch::aarch64::neonvld1q_dup_f64function
84core::core_arch::aarch64::neonvld1q_lane_f64function
85core::core_arch::aarch64::neon::generatedvld1_f16function* Neon intrinsic unsafe
86core::core_arch::aarch64::neon::generatedvld1_f32function* Neon intrinsic unsafe
87core::core_arch::aarch64::neon::generatedvld1_f64function* Neon intrinsic unsafe
88core::core_arch::aarch64::neon::generatedvld1_f64_x2function* Neon intrinsic unsafe
89core::core_arch::aarch64::neon::generatedvld1_f64_x3function* Neon intrinsic unsafe
90core::core_arch::aarch64::neon::generatedvld1_f64_x4function* Neon intrinsic unsafe
91core::core_arch::aarch64::neon::generatedvld1_p16function* Neon intrinsic unsafe
92core::core_arch::aarch64::neon::generatedvld1_p64function* Neon intrinsic unsafe
93core::core_arch::aarch64::neon::generatedvld1_p8function* Neon intrinsic unsafe
94core::core_arch::aarch64::neon::generatedvld1_s16function* Neon intrinsic unsafe
95core::core_arch::aarch64::neon::generatedvld1_s32function* Neon intrinsic unsafe
96core::core_arch::aarch64::neon::generatedvld1_s64function* Neon intrinsic unsafe
97core::core_arch::aarch64::neon::generatedvld1_s8function* Neon intrinsic unsafe
98core::core_arch::aarch64::neon::generatedvld1_u16function* Neon intrinsic unsafe
99core::core_arch::aarch64::neon::generatedvld1_u32function* Neon intrinsic unsafe
100core::core_arch::aarch64::neon::generatedvld1_u64function* Neon intrinsic unsafe
101core::core_arch::aarch64::neon::generatedvld1_u8function* Neon intrinsic unsafe
102core::core_arch::aarch64::neon::generatedvld1q_f16function* Neon intrinsic unsafe
103core::core_arch::aarch64::neon::generatedvld1q_f32function* Neon intrinsic unsafe
104core::core_arch::aarch64::neon::generatedvld1q_f64function* Neon intrinsic unsafe
105core::core_arch::aarch64::neon::generatedvld1q_f64_x2function* Neon intrinsic unsafe
106core::core_arch::aarch64::neon::generatedvld1q_f64_x3function* Neon intrinsic unsafe
107core::core_arch::aarch64::neon::generatedvld1q_f64_x4function* Neon intrinsic unsafe
108core::core_arch::aarch64::neon::generatedvld1q_p16function* Neon intrinsic unsafe
109core::core_arch::aarch64::neon::generatedvld1q_p64function* Neon intrinsic unsafe
110core::core_arch::aarch64::neon::generatedvld1q_p8function* Neon intrinsic unsafe
111core::core_arch::aarch64::neon::generatedvld1q_s16function* Neon intrinsic unsafe
112core::core_arch::aarch64::neon::generatedvld1q_s32function* Neon intrinsic unsafe
113core::core_arch::aarch64::neon::generatedvld1q_s64function* Neon intrinsic unsafe
114core::core_arch::aarch64::neon::generatedvld1q_s8function* Neon intrinsic unsafe
115core::core_arch::aarch64::neon::generatedvld1q_u16function* Neon intrinsic unsafe
116core::core_arch::aarch64::neon::generatedvld1q_u32function* Neon intrinsic unsafe
117core::core_arch::aarch64::neon::generatedvld1q_u64function* Neon intrinsic unsafe
118core::core_arch::aarch64::neon::generatedvld1q_u8function* Neon intrinsic unsafe
119core::core_arch::aarch64::neon::generatedvld2_dup_f64function* Neon intrinsic unsafe
120core::core_arch::aarch64::neon::generatedvld2_f64function* Neon intrinsic unsafe
121core::core_arch::aarch64::neon::generatedvld2_lane_f64function* Neon intrinsic unsafe
122core::core_arch::aarch64::neon::generatedvld2_lane_p64function* Neon intrinsic unsafe
123core::core_arch::aarch64::neon::generatedvld2_lane_s64function* Neon intrinsic unsafe
124core::core_arch::aarch64::neon::generatedvld2_lane_u64function* Neon intrinsic unsafe
125core::core_arch::aarch64::neon::generatedvld2q_dup_f64function* Neon intrinsic unsafe
126core::core_arch::aarch64::neon::generatedvld2q_dup_p64function* Neon intrinsic unsafe
127core::core_arch::aarch64::neon::generatedvld2q_dup_s64function* Neon intrinsic unsafe
128core::core_arch::aarch64::neon::generatedvld2q_dup_u64function* Neon intrinsic unsafe
129core::core_arch::aarch64::neon::generatedvld2q_f64function* Neon intrinsic unsafe
130core::core_arch::aarch64::neon::generatedvld2q_lane_f64function* Neon intrinsic unsafe
131core::core_arch::aarch64::neon::generatedvld2q_lane_p64function* Neon intrinsic unsafe
132core::core_arch::aarch64::neon::generatedvld2q_lane_p8function* Neon intrinsic unsafe
133core::core_arch::aarch64::neon::generatedvld2q_lane_s64function* Neon intrinsic unsafe
134core::core_arch::aarch64::neon::generatedvld2q_lane_s8function* Neon intrinsic unsafe
135core::core_arch::aarch64::neon::generatedvld2q_lane_u64function* Neon intrinsic unsafe
136core::core_arch::aarch64::neon::generatedvld2q_lane_u8function* Neon intrinsic unsafe
137core::core_arch::aarch64::neon::generatedvld2q_p64function* Neon intrinsic unsafe
138core::core_arch::aarch64::neon::generatedvld2q_s64function* Neon intrinsic unsafe
139core::core_arch::aarch64::neon::generatedvld2q_u64function* Neon intrinsic unsafe
140core::core_arch::aarch64::neon::generatedvld3_dup_f64function* Neon intrinsic unsafe
141core::core_arch::aarch64::neon::generatedvld3_f64function* Neon intrinsic unsafe
142core::core_arch::aarch64::neon::generatedvld3_lane_f64function* Neon intrinsic unsafe
143core::core_arch::aarch64::neon::generatedvld3_lane_p64function* Neon intrinsic unsafe
144core::core_arch::aarch64::neon::generatedvld3_lane_s64function* Neon intrinsic unsafe
145core::core_arch::aarch64::neon::generatedvld3_lane_u64function* Neon intrinsic unsafe
146core::core_arch::aarch64::neon::generatedvld3q_dup_f64function* Neon intrinsic unsafe
147core::core_arch::aarch64::neon::generatedvld3q_dup_p64function* Neon intrinsic unsafe
148core::core_arch::aarch64::neon::generatedvld3q_dup_s64function* Neon intrinsic unsafe
149core::core_arch::aarch64::neon::generatedvld3q_dup_u64function* Neon intrinsic unsafe
150core::core_arch::aarch64::neon::generatedvld3q_f64function* Neon intrinsic unsafe
151core::core_arch::aarch64::neon::generatedvld3q_lane_f64function* Neon intrinsic unsafe
152core::core_arch::aarch64::neon::generatedvld3q_lane_p64function* Neon intrinsic unsafe
153core::core_arch::aarch64::neon::generatedvld3q_lane_p8function* Neon intrinsic unsafe
154core::core_arch::aarch64::neon::generatedvld3q_lane_s64function* Neon intrinsic unsafe
155core::core_arch::aarch64::neon::generatedvld3q_lane_s8function* Neon intrinsic unsafe
156core::core_arch::aarch64::neon::generatedvld3q_lane_u64function* Neon intrinsic unsafe
157core::core_arch::aarch64::neon::generatedvld3q_lane_u8function* Neon intrinsic unsafe
158core::core_arch::aarch64::neon::generatedvld3q_p64function* Neon intrinsic unsafe
159core::core_arch::aarch64::neon::generatedvld3q_s64function* Neon intrinsic unsafe
160core::core_arch::aarch64::neon::generatedvld3q_u64function* Neon intrinsic unsafe
161core::core_arch::aarch64::neon::generatedvld4_dup_f64function* Neon intrinsic unsafe
162core::core_arch::aarch64::neon::generatedvld4_f64function* Neon intrinsic unsafe
163core::core_arch::aarch64::neon::generatedvld4_lane_f64function* Neon intrinsic unsafe
164core::core_arch::aarch64::neon::generatedvld4_lane_p64function* Neon intrinsic unsafe
165core::core_arch::aarch64::neon::generatedvld4_lane_s64function* Neon intrinsic unsafe
166core::core_arch::aarch64::neon::generatedvld4_lane_u64function* Neon intrinsic unsafe
167core::core_arch::aarch64::neon::generatedvld4q_dup_f64function* Neon intrinsic unsafe
168core::core_arch::aarch64::neon::generatedvld4q_dup_p64function* Neon intrinsic unsafe
169core::core_arch::aarch64::neon::generatedvld4q_dup_s64function* Neon intrinsic unsafe
170core::core_arch::aarch64::neon::generatedvld4q_dup_u64function* Neon intrinsic unsafe
171core::core_arch::aarch64::neon::generatedvld4q_f64function* Neon intrinsic unsafe
172core::core_arch::aarch64::neon::generatedvld4q_lane_f64function* Neon intrinsic unsafe
173core::core_arch::aarch64::neon::generatedvld4q_lane_p64function* Neon intrinsic unsafe
174core::core_arch::aarch64::neon::generatedvld4q_lane_p8function* Neon intrinsic unsafe
175core::core_arch::aarch64::neon::generatedvld4q_lane_s64function* Neon intrinsic unsafe
176core::core_arch::aarch64::neon::generatedvld4q_lane_s8function* Neon intrinsic unsafe
177core::core_arch::aarch64::neon::generatedvld4q_lane_u64function* Neon intrinsic unsafe
178core::core_arch::aarch64::neon::generatedvld4q_lane_u8function* Neon intrinsic unsafe
179core::core_arch::aarch64::neon::generatedvld4q_p64function* Neon intrinsic unsafe
180core::core_arch::aarch64::neon::generatedvld4q_s64function* Neon intrinsic unsafe
181core::core_arch::aarch64::neon::generatedvld4q_u64function* Neon intrinsic unsafe
182core::core_arch::aarch64::neon::generatedvldap1_lane_p64function* Neon intrinsic unsafe
183core::core_arch::aarch64::neon::generatedvldap1_lane_s64function* Neon intrinsic unsafe
184core::core_arch::aarch64::neon::generatedvldap1_lane_u64function* Neon intrinsic unsafe
185core::core_arch::aarch64::neon::generatedvldap1q_lane_f64function* Neon intrinsic unsafe
186core::core_arch::aarch64::neon::generatedvldap1q_lane_p64function* Neon intrinsic unsafe
187core::core_arch::aarch64::neon::generatedvldap1q_lane_s64function* Neon intrinsic unsafe
188core::core_arch::aarch64::neon::generatedvldap1q_lane_u64function* Neon intrinsic unsafe
189core::core_arch::aarch64::neon::generatedvluti2_lane_f16function* Neon intrinsic unsafe
190core::core_arch::aarch64::neon::generatedvluti2_lane_p16function* Neon intrinsic unsafe
191core::core_arch::aarch64::neon::generatedvluti2_lane_p8function* Neon intrinsic unsafe
192core::core_arch::aarch64::neon::generatedvluti2_lane_s16function* Neon intrinsic unsafe
193core::core_arch::aarch64::neon::generatedvluti2_lane_s8function* Neon intrinsic unsafe
194core::core_arch::aarch64::neon::generatedvluti2_lane_u16function* Neon intrinsic unsafe
195core::core_arch::aarch64::neon::generatedvluti2_lane_u8function* Neon intrinsic unsafe
196core::core_arch::aarch64::neon::generatedvluti2_laneq_f16function* Neon intrinsic unsafe
197core::core_arch::aarch64::neon::generatedvluti2_laneq_p16function* Neon intrinsic unsafe
198core::core_arch::aarch64::neon::generatedvluti2_laneq_p8function* Neon intrinsic unsafe
199core::core_arch::aarch64::neon::generatedvluti2_laneq_s16function* Neon intrinsic unsafe
200core::core_arch::aarch64::neon::generatedvluti2_laneq_s8function* Neon intrinsic unsafe
201core::core_arch::aarch64::neon::generatedvluti2_laneq_u16function* Neon intrinsic unsafe
202core::core_arch::aarch64::neon::generatedvluti2_laneq_u8function* Neon intrinsic unsafe
203core::core_arch::aarch64::neon::generatedvluti2q_lane_f16function* Neon intrinsic unsafe
204core::core_arch::aarch64::neon::generatedvluti2q_lane_p16function* Neon intrinsic unsafe
205core::core_arch::aarch64::neon::generatedvluti2q_lane_p8function* Neon intrinsic unsafe
206core::core_arch::aarch64::neon::generatedvluti2q_lane_s16function* Neon intrinsic unsafe
207core::core_arch::aarch64::neon::generatedvluti2q_lane_s8function* Neon intrinsic unsafe
208core::core_arch::aarch64::neon::generatedvluti2q_lane_u16function* Neon intrinsic unsafe
209core::core_arch::aarch64::neon::generatedvluti2q_lane_u8function* Neon intrinsic unsafe
210core::core_arch::aarch64::neon::generatedvluti2q_laneq_f16function* Neon intrinsic unsafe
211core::core_arch::aarch64::neon::generatedvluti2q_laneq_p16function* Neon intrinsic unsafe
212core::core_arch::aarch64::neon::generatedvluti2q_laneq_p8function* Neon intrinsic unsafe
213core::core_arch::aarch64::neon::generatedvluti2q_laneq_s16function* Neon intrinsic unsafe
214core::core_arch::aarch64::neon::generatedvluti2q_laneq_s8function* Neon intrinsic unsafe
215core::core_arch::aarch64::neon::generatedvluti2q_laneq_u16function* Neon intrinsic unsafe
216core::core_arch::aarch64::neon::generatedvluti2q_laneq_u8function* Neon intrinsic unsafe
217core::core_arch::aarch64::neon::generatedvluti4q_lane_f16_x2function* Neon intrinsic unsafe
218core::core_arch::aarch64::neon::generatedvluti4q_lane_p16_x2function* Neon intrinsic unsafe
219core::core_arch::aarch64::neon::generatedvluti4q_lane_p8function* Neon intrinsic unsafe
220core::core_arch::aarch64::neon::generatedvluti4q_lane_s16_x2function* Neon intrinsic unsafe
221core::core_arch::aarch64::neon::generatedvluti4q_lane_s8function* Neon intrinsic unsafe
222core::core_arch::aarch64::neon::generatedvluti4q_lane_u16_x2function* Neon intrinsic unsafe
223core::core_arch::aarch64::neon::generatedvluti4q_lane_u8function* Neon intrinsic unsafe
224core::core_arch::aarch64::neon::generatedvluti4q_laneq_f16_x2function* Neon intrinsic unsafe
225core::core_arch::aarch64::neon::generatedvluti4q_laneq_p16_x2function* Neon intrinsic unsafe
226core::core_arch::aarch64::neon::generatedvluti4q_laneq_p8function* Neon intrinsic unsafe
227core::core_arch::aarch64::neon::generatedvluti4q_laneq_s16_x2function* Neon intrinsic unsafe
228core::core_arch::aarch64::neon::generatedvluti4q_laneq_s8function* Neon intrinsic unsafe
229core::core_arch::aarch64::neon::generatedvluti4q_laneq_u16_x2function* Neon intrinsic unsafe
230core::core_arch::aarch64::neon::generatedvluti4q_laneq_u8function* Neon intrinsic unsafe
231core::core_arch::aarch64::neon::generatedvst1_f16function* Neon intrinsic unsafe
232core::core_arch::aarch64::neon::generatedvst1_f32function* Neon intrinsic unsafe
233core::core_arch::aarch64::neon::generatedvst1_f64function* Neon intrinsic unsafe
234core::core_arch::aarch64::neon::generatedvst1_f64_x2function* Neon intrinsic unsafe
235core::core_arch::aarch64::neon::generatedvst1_f64_x3function* Neon intrinsic unsafe
236core::core_arch::aarch64::neon::generatedvst1_f64_x4function* Neon intrinsic unsafe
237core::core_arch::aarch64::neon::generatedvst1_lane_f64function* Neon intrinsic unsafe
238core::core_arch::aarch64::neon::generatedvst1_p16function* Neon intrinsic unsafe
239core::core_arch::aarch64::neon::generatedvst1_p64function* Neon intrinsic unsafe
240core::core_arch::aarch64::neon::generatedvst1_p8function* Neon intrinsic unsafe
241core::core_arch::aarch64::neon::generatedvst1_s16function* Neon intrinsic unsafe
242core::core_arch::aarch64::neon::generatedvst1_s32function* Neon intrinsic unsafe
243core::core_arch::aarch64::neon::generatedvst1_s64function* Neon intrinsic unsafe
244core::core_arch::aarch64::neon::generatedvst1_s8function* Neon intrinsic unsafe
245core::core_arch::aarch64::neon::generatedvst1_u16function* Neon intrinsic unsafe
246core::core_arch::aarch64::neon::generatedvst1_u32function* Neon intrinsic unsafe
247core::core_arch::aarch64::neon::generatedvst1_u64function* Neon intrinsic unsafe
248core::core_arch::aarch64::neon::generatedvst1_u8function* Neon intrinsic unsafe
249core::core_arch::aarch64::neon::generatedvst1q_f16function* Neon intrinsic unsafe
250core::core_arch::aarch64::neon::generatedvst1q_f32function* Neon intrinsic unsafe
251core::core_arch::aarch64::neon::generatedvst1q_f64function* Neon intrinsic unsafe
252core::core_arch::aarch64::neon::generatedvst1q_f64_x2function* Neon intrinsic unsafe
253core::core_arch::aarch64::neon::generatedvst1q_f64_x3function* Neon intrinsic unsafe
254core::core_arch::aarch64::neon::generatedvst1q_f64_x4function* Neon intrinsic unsafe
255core::core_arch::aarch64::neon::generatedvst1q_lane_f64function* Neon intrinsic unsafe
256core::core_arch::aarch64::neon::generatedvst1q_p16function* Neon intrinsic unsafe
257core::core_arch::aarch64::neon::generatedvst1q_p64function* Neon intrinsic unsafe
258core::core_arch::aarch64::neon::generatedvst1q_p8function* Neon intrinsic unsafe
259core::core_arch::aarch64::neon::generatedvst1q_s16function* Neon intrinsic unsafe
260core::core_arch::aarch64::neon::generatedvst1q_s32function* Neon intrinsic unsafe
261core::core_arch::aarch64::neon::generatedvst1q_s64function* Neon intrinsic unsafe
262core::core_arch::aarch64::neon::generatedvst1q_s8function* Neon intrinsic unsafe
263core::core_arch::aarch64::neon::generatedvst1q_u16function* Neon intrinsic unsafe
264core::core_arch::aarch64::neon::generatedvst1q_u32function* Neon intrinsic unsafe
265core::core_arch::aarch64::neon::generatedvst1q_u64function* Neon intrinsic unsafe
266core::core_arch::aarch64::neon::generatedvst1q_u8function* Neon intrinsic unsafe
267core::core_arch::aarch64::neon::generatedvst2_f64function* Neon intrinsic unsafe
268core::core_arch::aarch64::neon::generatedvst2_lane_f64function* Neon intrinsic unsafe
269core::core_arch::aarch64::neon::generatedvst2_lane_p64function* Neon intrinsic unsafe
270core::core_arch::aarch64::neon::generatedvst2_lane_s64function* Neon intrinsic unsafe
271core::core_arch::aarch64::neon::generatedvst2_lane_u64function* Neon intrinsic unsafe
272core::core_arch::aarch64::neon::generatedvst2q_f64function* Neon intrinsic unsafe
273core::core_arch::aarch64::neon::generatedvst2q_lane_f64function* Neon intrinsic unsafe
274core::core_arch::aarch64::neon::generatedvst2q_lane_p64function* Neon intrinsic unsafe
275core::core_arch::aarch64::neon::generatedvst2q_lane_p8function* Neon intrinsic unsafe
276core::core_arch::aarch64::neon::generatedvst2q_lane_s64function* Neon intrinsic unsafe
277core::core_arch::aarch64::neon::generatedvst2q_lane_s8function* Neon intrinsic unsafe
278core::core_arch::aarch64::neon::generatedvst2q_lane_u64function* Neon intrinsic unsafe
279core::core_arch::aarch64::neon::generatedvst2q_lane_u8function* Neon intrinsic unsafe
280core::core_arch::aarch64::neon::generatedvst2q_p64function* Neon intrinsic unsafe
281core::core_arch::aarch64::neon::generatedvst2q_s64function* Neon intrinsic unsafe
282core::core_arch::aarch64::neon::generatedvst2q_u64function* Neon intrinsic unsafe
283core::core_arch::aarch64::neon::generatedvst3_f64function* Neon intrinsic unsafe
284core::core_arch::aarch64::neon::generatedvst3_lane_f64function* Neon intrinsic unsafe
285core::core_arch::aarch64::neon::generatedvst3_lane_p64function* Neon intrinsic unsafe
286core::core_arch::aarch64::neon::generatedvst3_lane_s64function* Neon intrinsic unsafe
287core::core_arch::aarch64::neon::generatedvst3_lane_u64function* Neon intrinsic unsafe
288core::core_arch::aarch64::neon::generatedvst3q_f64function* Neon intrinsic unsafe
289core::core_arch::aarch64::neon::generatedvst3q_lane_f64function* Neon intrinsic unsafe
290core::core_arch::aarch64::neon::generatedvst3q_lane_p64function* Neon intrinsic unsafe
291core::core_arch::aarch64::neon::generatedvst3q_lane_p8function* Neon intrinsic unsafe
292core::core_arch::aarch64::neon::generatedvst3q_lane_s64function* Neon intrinsic unsafe
293core::core_arch::aarch64::neon::generatedvst3q_lane_s8function* Neon intrinsic unsafe
294core::core_arch::aarch64::neon::generatedvst3q_lane_u64function* Neon intrinsic unsafe
295core::core_arch::aarch64::neon::generatedvst3q_lane_u8function* Neon intrinsic unsafe
296core::core_arch::aarch64::neon::generatedvst3q_p64function* Neon intrinsic unsafe
297core::core_arch::aarch64::neon::generatedvst3q_s64function* Neon intrinsic unsafe
298core::core_arch::aarch64::neon::generatedvst3q_u64function* Neon intrinsic unsafe
299core::core_arch::aarch64::neon::generatedvst4_f64function* Neon intrinsic unsafe
300core::core_arch::aarch64::neon::generatedvst4_lane_f64function* Neon intrinsic unsafe
301core::core_arch::aarch64::neon::generatedvst4_lane_p64function* Neon intrinsic unsafe
302core::core_arch::aarch64::neon::generatedvst4_lane_s64function* Neon intrinsic unsafe
303core::core_arch::aarch64::neon::generatedvst4_lane_u64function* Neon intrinsic unsafe
304core::core_arch::aarch64::neon::generatedvst4q_f64function* Neon intrinsic unsafe
305core::core_arch::aarch64::neon::generatedvst4q_lane_f64function* Neon intrinsic unsafe
306core::core_arch::aarch64::neon::generatedvst4q_lane_p64function* Neon intrinsic unsafe
307core::core_arch::aarch64::neon::generatedvst4q_lane_p8function* Neon intrinsic unsafe
308core::core_arch::aarch64::neon::generatedvst4q_lane_s64function* Neon intrinsic unsafe
309core::core_arch::aarch64::neon::generatedvst4q_lane_s8function* Neon intrinsic unsafe
310core::core_arch::aarch64::neon::generatedvst4q_lane_u64function* Neon intrinsic unsafe
311core::core_arch::aarch64::neon::generatedvst4q_lane_u8function* Neon intrinsic unsafe
312core::core_arch::aarch64::neon::generatedvst4q_p64function* Neon intrinsic unsafe
313core::core_arch::aarch64::neon::generatedvst4q_s64function* Neon intrinsic unsafe
314core::core_arch::aarch64::neon::generatedvst4q_u64function* Neon intrinsic unsafe
315core::core_arch::aarch64::prefetch_prefetchfunction
316core::core_arch::aarch64::rand__rndrfunction
317core::core_arch::aarch64::rand__rndrrsfunction
318core::core_arch::aarch64::sve2::generatedsvldnt1_gather_s64index_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
319core::core_arch::aarch64::sve2::generatedsvldnt1_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
320core::core_arch::aarch64::sve2::generatedsvldnt1_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
321core::core_arch::aarch64::sve2::generatedsvldnt1_gather_s64offset_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
322core::core_arch::aarch64::sve2::generatedsvldnt1_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
323 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
324 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
325 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
326 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
327 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
328 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
329 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
330 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
331 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
332 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
333 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
334 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
335 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
336 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
337 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
338 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
339 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
340 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
341 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
342 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
343 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
344 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
345 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
346 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
347 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
348 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
349 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
350 | core::core_arch::aarch64::sve2::generated | svldnt1_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
351 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
352 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
353 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
354 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
355 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
356 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
357 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
358 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
359 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
360 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
361 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
362 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
363 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
364 | core::core_arch::aarch64::sve2::generated | svldnt1sb_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
365 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
366 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
367 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
368 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
369 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
370 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
371 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
372 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
373 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
374 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
375 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
376 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
377 | core::core_arch::aarch64::sve2::generated | svldnt1sh_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
378core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
379core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
380core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
381core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
382core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
383core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
384core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
385core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
386core::core_arch::aarch64::sve2::generatedsvldnt1sh_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
387core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
388core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
389core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
390core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
391core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
392core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
393core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
394core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
395core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
396core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
397core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
398core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
399core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
400core::core_arch::aarch64::sve2::generatedsvldnt1sw_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
401core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
402core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
403core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
404core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
405core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
406core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
407core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
408core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
409core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
410core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
411core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
412core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
413core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
414core::core_arch::aarch64::sve2::generatedsvldnt1ub_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
415core::core_arch::aarch64::sve2::generatedsvldnt1uh_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
416core::core_arch::aarch64::sve2::generatedsvldnt1uh_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
417core::core_arch::aarch64::sve2::generatedsvldnt1uh_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
418core::core_arch::aarch64::sve2::generatedsvldnt1uh_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
419core::core_arch::aarch64::sve2::generatedsvldnt1uh_gather_u32base_index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
| 420 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32base_index_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 421 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32base_offset_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 422 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32base_offset_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 423 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32base_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 424 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32base_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 425 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32offset_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 426 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u32offset_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 427 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 428 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 429 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 430 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 431 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 432 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64base_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 433 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 434 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 435 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 436 | core::core_arch::aarch64::sve2::generated | `svldnt1uh_gather_u64offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 437 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_s64index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 438 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_s64index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 439 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_s64offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 440 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_s64offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 441 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 442 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 443 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 444 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 445 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 446 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64base_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 447 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 448 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 449 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 450 | core::core_arch::aarch64::sve2::generated | `svldnt1uw_gather_u64offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 451 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64index_f64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 452 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 453 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 454 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64offset_f64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 455 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 456 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_s64offset_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 457 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_f32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 458 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_index_f32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 459 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_index_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 460 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_index_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 461 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_offset_f32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 462 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_offset_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 463 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_offset_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 464 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 465 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32base_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 466 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32offset_f32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 467 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32offset_s32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 468 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u32offset_u32` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 469 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_f64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 470 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_index_f64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 471 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_index_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 472 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_index_u64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 473 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_offset_f64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
| 474 | core::core_arch::aarch64::sve2::generated | `svstnt1_scatter_u64base_offset_s64` | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en). |
475core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
476core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
477core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
478core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64index_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
479core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
480core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
481core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64offset_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
482core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
483core::core_arch::aarch64::sve2::generatedsvstnt1_scatter_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
484core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
485core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
486core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
487core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
488core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
489core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
490core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
491core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
492core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
493core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
494core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
495core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
496core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
497core::core_arch::aarch64::sve2::generatedsvstnt1b_scatter_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
498core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
499core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
500core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
501core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
502core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
503core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
504core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
505core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
506core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
507core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
508core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
509core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
510core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
511core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
512core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
513core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
514core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
515core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
516core::core_arch::aarch64::sve2::generatedsvstnt1h_scatter_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
517 | core::core_arch::aarch64::sve2::generated | svstnt1h_scatter_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
518 | core::core_arch::aarch64::sve2::generated | svstnt1h_scatter_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
519 | core::core_arch::aarch64::sve2::generated | svstnt1h_scatter_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
520 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
521 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
522 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
523 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
524 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
525 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
526 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
527 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
528 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
529 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it. * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
530 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
531 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
532 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
533 | core::core_arch::aarch64::sve2::generated | svstnt1w_scatter_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
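The scatter-store entries above share one safety shape: addresses are computed and dereferenced per lane, and only for lanes where the governing predicate `pg` is active. A minimal scalar model of that shape, in portable Rust with a hypothetical `scatter_store_model` function (not one of the intrinsics), shows why inactive lanes may hold arbitrary indices:

```rust
// Hypothetical scalar model of a predicate-governed scatter store.
// For each active lane, `base.add(index)` is computed and written;
// inactive lanes are skipped entirely, so neither the `pointer::offset`
// rules nor the write obligations apply to them.
fn scatter_store_model(base: *mut u32, pg: &[bool], indices: &[usize], data: &[u32]) {
    for lane in 0..pg.len() {
        if pg[lane] {
            // SAFETY (caller's obligation, mirroring the docs above):
            // `base.add(indices[lane])` must stay inside one allocation
            // and be valid for writes, for every *active* lane.
            unsafe { *base.add(indices[lane]) = data[lane] };
        }
    }
}

fn main() {
    let mut buf = [0u32; 8];
    // Lane 1 is inactive, so its wildly out-of-range index 100 is never
    // used to form an address.
    let pg = [true, false, true];
    let idx = [0, 100, 5];
    let data = [10, 20, 30];
    scatter_store_model(buf.as_mut_ptr(), &pg, &idx, &data);
    assert_eq!(buf, [10, 0, 0, 0, 0, 30, 0, 0]);
}
```

The real intrinsics add further conditions this sketch does not model: `u64base` variants take provenance-free addresses, and the non-temporal forms carry the memory-ordering caveat linked in each row.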
534 | core::core_arch::aarch64::sve2::generated | svwhilerw_f32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
535 | core::core_arch::aarch64::sve2::generated | svwhilerw_f64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
536 | core::core_arch::aarch64::sve2::generated | svwhilerw_s16 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
537 | core::core_arch::aarch64::sve2::generated | svwhilerw_s32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
538 | core::core_arch::aarch64::sve2::generated | svwhilerw_s64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
539 | core::core_arch::aarch64::sve2::generated | svwhilerw_s8 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
540 | core::core_arch::aarch64::sve2::generated | svwhilerw_u16 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
541 | core::core_arch::aarch64::sve2::generated | svwhilerw_u32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
542 | core::core_arch::aarch64::sve2::generated | svwhilerw_u64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
543 | core::core_arch::aarch64::sve2::generated | svwhilerw_u8 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
544 | core::core_arch::aarch64::sve2::generated | svwhilewr_f32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
545 | core::core_arch::aarch64::sve2::generated | svwhilewr_f64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
546 | core::core_arch::aarch64::sve2::generated | svwhilewr_s16 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
547 | core::core_arch::aarch64::sve2::generated | svwhilewr_s32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
548 | core::core_arch::aarch64::sve2::generated | svwhilewr_s64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
549 | core::core_arch::aarch64::sve2::generated | svwhilewr_s8 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
550 | core::core_arch::aarch64::sve2::generated | svwhilewr_u16 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
551 | core::core_arch::aarch64::sve2::generated | svwhilewr_u32 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
552 | core::core_arch::aarch64::sve2::generated | svwhilewr_u64 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
553 | core::core_arch::aarch64::sve2::generated | svwhilewr_u8 | function | * [`pointer::byte_offset_from`](pointer#method.byte_offset_from) safety constraints must be met for at least the base pointers, `op1` and `op2`.
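The `svwhilerw`/`svwhilewr` predicates are derived from the byte distance between `op1` and `op2`, which is why [`pointer::byte_offset_from`]'s constraints apply: both pointers must come from the same allocation. A hedged scalar sketch (a hypothetical `conflict_free_lanes` helper, approximating but not reproducing the exact ACLE lane rule) of the distance calculation:

```rust
// Hypothetical scalar sketch of a whilerw-style conflict check: count
// how many elements starting at `op1` can be processed before reaching
// `op2`. The distance computation is the unsafe step.
fn conflict_free_lanes<T>(op1: *const T, op2: *const T, max_lanes: usize) -> usize {
    // SAFETY: the caller guarantees `op1` and `op2` point into the same
    // allocation, as the rows above require for the base pointers.
    let dist = unsafe { op2.byte_offset_from(op1) };
    if dist <= 0 {
        // op2 at or before op1: no forward hazard in this simplified model.
        max_lanes
    } else {
        // Lanes whose element start lies before op2 (approximate rule).
        (dist as usize).div_ceil(std::mem::size_of::<T>()).min(max_lanes)
    }
}

fn main() {
    let v = [0u32; 16];
    let p = v.as_ptr();
    // op2 three elements (12 bytes) ahead: three conflict-free lanes.
    assert_eq!(conflict_free_lanes(p, unsafe { p.add(3) }, 8), 3);
    // Same pointer: all lanes free in this model.
    assert_eq!(conflict_free_lanes(p, p, 8), 8);
}
```

Note that `byte_offset_from` has undefined behavior if the two pointers belong to different allocations, so the "same allocation" condition cannot be checked at runtime; it is a proof obligation on the caller.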
554 | core::core_arch::aarch64::sve::generated | svld1_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
555 | core::core_arch::aarch64::sve::generated | svld1_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
556 | core::core_arch::aarch64::sve::generated | svld1_gather_s32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
557 | core::core_arch::aarch64::sve::generated | svld1_gather_s32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
558 | core::core_arch::aarch64::sve::generated | svld1_gather_s32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
559 | core::core_arch::aarch64::sve::generated | svld1_gather_s32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
560 | core::core_arch::aarch64::sve::generated | svld1_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
561 | core::core_arch::aarch64::sve::generated | svld1_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
562 | core::core_arch::aarch64::sve::generated | svld1_gather_s64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
563 | core::core_arch::aarch64::sve::generated | svld1_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
564 | core::core_arch::aarch64::sve::generated | svld1_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
565 | core::core_arch::aarch64::sve::generated | svld1_gather_s64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
566 | core::core_arch::aarch64::sve::generated | svld1_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
567 | core::core_arch::aarch64::sve::generated | svld1_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
568 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
569 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
570 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
571 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
572 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
573 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
574 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
575 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
576 | core::core_arch::aarch64::sve::generated | svld1_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
577 | core::core_arch::aarch64::sve::generated | svld1_gather_u32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
578 | core::core_arch::aarch64::sve::generated | svld1_gather_u32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
579 | core::core_arch::aarch64::sve::generated | svld1_gather_u32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
580 | core::core_arch::aarch64::sve::generated | svld1_gather_u32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
581 | core::core_arch::aarch64::sve::generated | svld1_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
582 | core::core_arch::aarch64::sve::generated | svld1_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
583 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
584 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
585 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
586 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
587 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
588 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
589 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
590 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
591 | core::core_arch::aarch64::sve::generated | svld1_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
592 | core::core_arch::aarch64::sve::generated | svld1_gather_u64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
593 | core::core_arch::aarch64::sve::generated | svld1_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
594 | core::core_arch::aarch64::sve::generated | svld1_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
595 | core::core_arch::aarch64::sve::generated | svld1_gather_u64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
596 | core::core_arch::aarch64::sve::generated | svld1_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
597 | core::core_arch::aarch64::sve::generated | svld1_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
598 | core::core_arch::aarch64::sve::generated | svld1_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
599 | core::core_arch::aarch64::sve::generated | svld1_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
600 | core::core_arch::aarch64::sve::generated | svld1_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
601 | core::core_arch::aarch64::sve::generated | svld1_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
602 | core::core_arch::aarch64::sve::generated | svld1_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
603 | core::core_arch::aarch64::sve::generated | svld1_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
604 | core::core_arch::aarch64::sve::generated | svld1_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
605 | core::core_arch::aarch64::sve::generated | svld1_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
606core::core_arch::aarch64::sve::generatedsvld1_vnum_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
607core::core_arch::aarch64::sve::generatedsvld1_vnum_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
608core::core_arch::aarch64::sve::generatedsvld1_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
609core::core_arch::aarch64::sve::generatedsvld1_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
610core::core_arch::aarch64::sve::generatedsvld1_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
611core::core_arch::aarch64::sve::generatedsvld1_vnum_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
612core::core_arch::aarch64::sve::generatedsvld1_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
613core::core_arch::aarch64::sve::generatedsvld1_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
614core::core_arch::aarch64::sve::generatedsvld1_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
615core::core_arch::aarch64::sve::generatedsvld1_vnum_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
616core::core_arch::aarch64::sve::generatedsvld1ro_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
617core::core_arch::aarch64::sve::generatedsvld1ro_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
618core::core_arch::aarch64::sve::generatedsvld1ro_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
619core::core_arch::aarch64::sve::generatedsvld1ro_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
620core::core_arch::aarch64::sve::generatedsvld1ro_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
621core::core_arch::aarch64::sve::generatedsvld1ro_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
622core::core_arch::aarch64::sve::generatedsvld1ro_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
623core::core_arch::aarch64::sve::generatedsvld1ro_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
624core::core_arch::aarch64::sve::generatedsvld1ro_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
625core::core_arch::aarch64::sve::generatedsvld1ro_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
626core::core_arch::aarch64::sve::generatedsvld1rq_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
627core::core_arch::aarch64::sve::generatedsvld1rq_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
628core::core_arch::aarch64::sve::generatedsvld1rq_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
629core::core_arch::aarch64::sve::generatedsvld1rq_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
630core::core_arch::aarch64::sve::generatedsvld1rq_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
631core::core_arch::aarch64::sve::generatedsvld1rq_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
632core::core_arch::aarch64::sve::generatedsvld1rq_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
633core::core_arch::aarch64::sve::generatedsvld1rq_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
634core::core_arch::aarch64::sve::generatedsvld1rq_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
635core::core_arch::aarch64::sve::generatedsvld1rq_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
636core::core_arch::aarch64::sve::generatedsvld1sb_gather_s32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
637core::core_arch::aarch64::sve::generatedsvld1sb_gather_s32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
638core::core_arch::aarch64::sve::generatedsvld1sb_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
639core::core_arch::aarch64::sve::generatedsvld1sb_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
640core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
641core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
642core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
643core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
644core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
645core::core_arch::aarch64::sve::generatedsvld1sb_gather_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
646core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
647core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
648core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
649core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
650core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
651core::core_arch::aarch64::sve::generatedsvld1sb_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
652core::core_arch::aarch64::sve::generatedsvld1sb_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
653core::core_arch::aarch64::sve::generatedsvld1sb_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
654core::core_arch::aarch64::sve::generatedsvld1sb_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
655core::core_arch::aarch64::sve::generatedsvld1sb_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
656core::core_arch::aarch64::sve::generatedsvld1sb_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
657core::core_arch::aarch64::sve::generatedsvld1sb_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
658core::core_arch::aarch64::sve::generatedsvld1sb_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
659core::core_arch::aarch64::sve::generatedsvld1sb_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
660core::core_arch::aarch64::sve::generatedsvld1sb_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
661core::core_arch::aarch64::sve::generatedsvld1sb_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
662core::core_arch::aarch64::sve::generatedsvld1sb_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
663core::core_arch::aarch64::sve::generatedsvld1sb_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
664core::core_arch::aarch64::sve::generatedsvld1sh_gather_s32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
665core::core_arch::aarch64::sve::generatedsvld1sh_gather_s32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
666core::core_arch::aarch64::sve::generatedsvld1sh_gather_s32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
667core::core_arch::aarch64::sve::generatedsvld1sh_gather_s32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
668core::core_arch::aarch64::sve::generatedsvld1sh_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
669core::core_arch::aarch64::sve::generatedsvld1sh_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
670core::core_arch::aarch64::sve::generatedsvld1sh_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
671core::core_arch::aarch64::sve::generatedsvld1sh_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
672core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
673core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
674core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
675core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
676core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
677core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
678core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
679core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
680core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
681core::core_arch::aarch64::sve::generatedsvld1sh_gather_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
682core::core_arch::aarch64::sve::generatedsvld1sh_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
683core::core_arch::aarch64::sve::generatedsvld1sh_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
684core::core_arch::aarch64::sve::generatedsvld1sh_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
685 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
686 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
687 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
688 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
689 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
690 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
691 | core::core_arch::aarch64::sve::generated | svld1sh_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
692 | core::core_arch::aarch64::sve::generated | svld1sh_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
693 | core::core_arch::aarch64::sve::generated | svld1sh_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
694 | core::core_arch::aarch64::sve::generated | svld1sh_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
695 | core::core_arch::aarch64::sve::generated | svld1sh_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
696 | core::core_arch::aarch64::sve::generated | svld1sh_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
697 | core::core_arch::aarch64::sve::generated | svld1sh_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
698 | core::core_arch::aarch64::sve::generated | svld1sh_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
699 | core::core_arch::aarch64::sve::generated | svld1sh_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
700 | core::core_arch::aarch64::sve::generated | svld1sw_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
701 | core::core_arch::aarch64::sve::generated | svld1sw_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
702 | core::core_arch::aarch64::sve::generated | svld1sw_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
703 | core::core_arch::aarch64::sve::generated | svld1sw_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
704 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
705 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
706 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
707 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
708 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
709 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
710 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
711 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
712 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
713 | core::core_arch::aarch64::sve::generated | svld1sw_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
714 | core::core_arch::aarch64::sve::generated | svld1sw_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
715 | core::core_arch::aarch64::sve::generated | svld1sw_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
716 | core::core_arch::aarch64::sve::generated | svld1sw_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
717 | core::core_arch::aarch64::sve::generated | svld1sw_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
718 | core::core_arch::aarch64::sve::generated | svld1ub_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
719 | core::core_arch::aarch64::sve::generated | svld1ub_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
720 | core::core_arch::aarch64::sve::generated | svld1ub_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
721 | core::core_arch::aarch64::sve::generated | svld1ub_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
722 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
723 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
724 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
725 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
726 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
727 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
728 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
729 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
730 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
731 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
732 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
733 | core::core_arch::aarch64::sve::generated | svld1ub_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
734 | core::core_arch::aarch64::sve::generated | svld1ub_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
735 | core::core_arch::aarch64::sve::generated | svld1ub_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
736 | core::core_arch::aarch64::sve::generated | svld1ub_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
737 | core::core_arch::aarch64::sve::generated | svld1ub_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
738 | core::core_arch::aarch64::sve::generated | svld1ub_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
739 | core::core_arch::aarch64::sve::generated | svld1ub_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
740 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
741 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
742 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
743 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
744 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
745 | core::core_arch::aarch64::sve::generated | svld1ub_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
746 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
747 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
748 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
749 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
750 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
751 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
752 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
753 | core::core_arch::aarch64::sve::generated | svld1uh_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
754 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
755 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
756 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
757 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
758 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
759 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
760 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
761 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
762 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
763 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
764 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
765 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
766 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
767 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
768 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
769 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
770 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
771 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
772 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
773 | core::core_arch::aarch64::sve::generated | svld1uh_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
774 | core::core_arch::aarch64::sve::generated | svld1uh_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
775 | core::core_arch::aarch64::sve::generated | svld1uh_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
776 | core::core_arch::aarch64::sve::generated | svld1uh_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
777 | core::core_arch::aarch64::sve::generated | svld1uh_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
778 | core::core_arch::aarch64::sve::generated | svld1uh_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
779core::core_arch::aarch64::sve::generatedsvld1uh_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
780core::core_arch::aarch64::sve::generatedsvld1uh_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
781core::core_arch::aarch64::sve::generatedsvld1uh_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
782core::core_arch::aarch64::sve::generatedsvld1uw_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
783core::core_arch::aarch64::sve::generatedsvld1uw_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
784core::core_arch::aarch64::sve::generatedsvld1uw_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
785core::core_arch::aarch64::sve::generatedsvld1uw_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
786core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
787core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
788core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
789core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
790core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
791core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
792core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
793core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
794core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
795core::core_arch::aarch64::sve::generatedsvld1uw_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
796core::core_arch::aarch64::sve::generatedsvld1uw_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
797core::core_arch::aarch64::sve::generatedsvld1uw_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
798core::core_arch::aarch64::sve::generatedsvld1uw_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
799core::core_arch::aarch64::sve::generatedsvld1uw_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
800core::core_arch::aarch64::sve::generatedsvld2_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
801core::core_arch::aarch64::sve::generatedsvld2_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
802core::core_arch::aarch64::sve::generatedsvld2_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
803core::core_arch::aarch64::sve::generatedsvld2_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
804core::core_arch::aarch64::sve::generatedsvld2_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
805core::core_arch::aarch64::sve::generatedsvld2_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
806core::core_arch::aarch64::sve::generatedsvld2_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
807core::core_arch::aarch64::sve::generatedsvld2_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
808core::core_arch::aarch64::sve::generatedsvld2_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
809core::core_arch::aarch64::sve::generatedsvld2_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
810core::core_arch::aarch64::sve::generatedsvld2_vnum_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
811core::core_arch::aarch64::sve::generatedsvld2_vnum_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
812core::core_arch::aarch64::sve::generatedsvld2_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
813core::core_arch::aarch64::sve::generatedsvld2_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
814core::core_arch::aarch64::sve::generatedsvld2_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
815core::core_arch::aarch64::sve::generatedsvld2_vnum_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
816core::core_arch::aarch64::sve::generatedsvld2_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
817core::core_arch::aarch64::sve::generatedsvld2_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
818core::core_arch::aarch64::sve::generatedsvld2_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
819core::core_arch::aarch64::sve::generatedsvld2_vnum_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
820core::core_arch::aarch64::sve::generatedsvld3_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
821core::core_arch::aarch64::sve::generatedsvld3_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
822core::core_arch::aarch64::sve::generatedsvld3_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
823core::core_arch::aarch64::sve::generatedsvld3_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
824core::core_arch::aarch64::sve::generatedsvld3_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
825core::core_arch::aarch64::sve::generatedsvld3_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
826core::core_arch::aarch64::sve::generatedsvld3_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
827core::core_arch::aarch64::sve::generatedsvld3_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
828core::core_arch::aarch64::sve::generatedsvld3_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
829core::core_arch::aarch64::sve::generatedsvld3_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
830core::core_arch::aarch64::sve::generatedsvld3_vnum_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
831core::core_arch::aarch64::sve::generatedsvld3_vnum_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
832core::core_arch::aarch64::sve::generatedsvld3_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
833core::core_arch::aarch64::sve::generatedsvld3_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
834core::core_arch::aarch64::sve::generatedsvld3_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
835core::core_arch::aarch64::sve::generatedsvld3_vnum_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
836core::core_arch::aarch64::sve::generatedsvld3_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
837core::core_arch::aarch64::sve::generatedsvld3_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
838core::core_arch::aarch64::sve::generatedsvld3_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
839core::core_arch::aarch64::sve::generatedsvld3_vnum_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
840core::core_arch::aarch64::sve::generatedsvld4_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
841core::core_arch::aarch64::sve::generatedsvld4_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
842core::core_arch::aarch64::sve::generatedsvld4_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
843core::core_arch::aarch64::sve::generatedsvld4_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
844core::core_arch::aarch64::sve::generatedsvld4_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
845core::core_arch::aarch64::sve::generatedsvld4_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
846core::core_arch::aarch64::sve::generatedsvld4_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
847core::core_arch::aarch64::sve::generatedsvld4_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
848core::core_arch::aarch64::sve::generatedsvld4_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
849core::core_arch::aarch64::sve::generatedsvld4_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
850core::core_arch::aarch64::sve::generatedsvld4_vnum_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
851core::core_arch::aarch64::sve::generatedsvld4_vnum_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
852core::core_arch::aarch64::sve::generatedsvld4_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
853core::core_arch::aarch64::sve::generatedsvld4_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
854core::core_arch::aarch64::sve::generatedsvld4_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
855core::core_arch::aarch64::sve::generatedsvld4_vnum_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
856core::core_arch::aarch64::sve::generatedsvld4_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
857core::core_arch::aarch64::sve::generatedsvld4_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
858core::core_arch::aarch64::sve::generatedsvld4_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
859 | core::core_arch::aarch64::sve::generated | svld4_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
860 | core::core_arch::aarch64::sve::generated | svldff1_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
861 | core::core_arch::aarch64::sve::generated | svldff1_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
862 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
863 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
864 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
865 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
866 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
867 | core::core_arch::aarch64::sve::generated | svldff1_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
868 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
869 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
870 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
871 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
872 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
873 | core::core_arch::aarch64::sve::generated | svldff1_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
874 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
875 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
876 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
877 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
878 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
879 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
880 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
881 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
882 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
883 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
884 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
885 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
886 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
887 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
888 | core::core_arch::aarch64::sve::generated | svldff1_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
889 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
890 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
891 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
892 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
893 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
894 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
895 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
896 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
897 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
898 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
899 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
900 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
901 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
902 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
903 | core::core_arch::aarch64::sve::generated | svldff1_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
904 | core::core_arch::aarch64::sve::generated | svldff1_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
905 | core::core_arch::aarch64::sve::generated | svldff1_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
906 | core::core_arch::aarch64::sve::generated | svldff1_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
907 | core::core_arch::aarch64::sve::generated | svldff1_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
908 | core::core_arch::aarch64::sve::generated | svldff1_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
909 | core::core_arch::aarch64::sve::generated | svldff1_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
910core::core_arch::aarch64::sve::generatedsvldff1_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
911core::core_arch::aarch64::sve::generatedsvldff1_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
912 | core::core_arch::aarch64::sve::generated | svldff1_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
913 | core::core_arch::aarch64::sve::generated | svldff1_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
914 | core::core_arch::aarch64::sve::generated | svldff1_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
915 | core::core_arch::aarch64::sve::generated | svldff1_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
916 | core::core_arch::aarch64::sve::generated | svldff1_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
917 | core::core_arch::aarch64::sve::generated | svldff1_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
918 | core::core_arch::aarch64::sve::generated | svldff1_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
919 | core::core_arch::aarch64::sve::generated | svldff1_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
920 | core::core_arch::aarch64::sve::generated | svldff1_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
921 | core::core_arch::aarch64::sve::generated | svldff1_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
922 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
923 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
924 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
925 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
926 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
927 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
928 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
929 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
930 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
931 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
932 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
933 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
934 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
935 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
936 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
937 | core::core_arch::aarch64::sve::generated | svldff1sb_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
938 | core::core_arch::aarch64::sve::generated | svldff1sb_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
939 | core::core_arch::aarch64::sve::generated | svldff1sb_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
940 | core::core_arch::aarch64::sve::generated | svldff1sb_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
941 | core::core_arch::aarch64::sve::generated | svldff1sb_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
942 | core::core_arch::aarch64::sve::generated | svldff1sb_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
943 | core::core_arch::aarch64::sve::generated | svldff1sb_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
944 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
945 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
946 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
947 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
948 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
949 | core::core_arch::aarch64::sve::generated | svldff1sb_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
950core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
951core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
952core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
953core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
954core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
955core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
956core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
957core::core_arch::aarch64::sve::generatedsvldff1sh_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
958core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
959core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
960core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
961core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
962core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
963core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
964core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
965core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
966core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
967core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
968core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
969core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
970core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
971core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
972core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
973core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
974core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
975core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
976core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
977core::core_arch::aarch64::sve::generatedsvldff1sh_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
978core::core_arch::aarch64::sve::generatedsvldff1sh_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
979core::core_arch::aarch64::sve::generatedsvldff1sh_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
980core::core_arch::aarch64::sve::generatedsvldff1sh_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
981core::core_arch::aarch64::sve::generatedsvldff1sh_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
982core::core_arch::aarch64::sve::generatedsvldff1sh_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
983core::core_arch::aarch64::sve::generatedsvldff1sh_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
984core::core_arch::aarch64::sve::generatedsvldff1sh_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
985core::core_arch::aarch64::sve::generatedsvldff1sh_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
986core::core_arch::aarch64::sve::generatedsvldff1sw_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
987core::core_arch::aarch64::sve::generatedsvldff1sw_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
988core::core_arch::aarch64::sve::generatedsvldff1sw_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
989core::core_arch::aarch64::sve::generatedsvldff1sw_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
990core::core_arch::aarch64::sve::generatedsvldff1sw_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
991 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
992 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
993 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
994 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
995 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
996 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
997 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
998 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
999 | core::core_arch::aarch64::sve::generated | svldff1sw_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1000 | core::core_arch::aarch64::sve::generated | svldff1sw_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1001 | core::core_arch::aarch64::sve::generated | svldff1sw_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1002 | core::core_arch::aarch64::sve::generated | svldff1sw_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1003 | core::core_arch::aarch64::sve::generated | svldff1sw_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1004 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1005 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1006 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1007 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1008 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1009 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1010 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1011 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1012 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1013 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1014 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1015 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1016 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1017 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1018 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1019 | core::core_arch::aarch64::sve::generated | svldff1ub_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1020 | core::core_arch::aarch64::sve::generated | svldff1ub_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1021 | core::core_arch::aarch64::sve::generated | svldff1ub_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1022 | core::core_arch::aarch64::sve::generated | svldff1ub_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1023 | core::core_arch::aarch64::sve::generated | svldff1ub_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1024 | core::core_arch::aarch64::sve::generated | svldff1ub_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1025 | core::core_arch::aarch64::sve::generated | svldff1ub_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1026 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1027 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1028 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1029 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1030 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1031 | core::core_arch::aarch64::sve::generated | svldff1ub_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1032 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1033 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1034 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1035 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1036 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1037 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1038 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1039 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1040 | core::core_arch::aarch64::sve::generated | svldff1uh_gather_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1041core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32base_index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1042core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1043core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1044core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1045core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1046core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1047core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1048core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1049core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1050core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1051core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1052core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1053core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1054core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1055core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1056core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1057core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1058core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1059core::core_arch::aarch64::sve::generatedsvldff1uh_gather_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1060core::core_arch::aarch64::sve::generatedsvldff1uh_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1061core::core_arch::aarch64::sve::generatedsvldff1uh_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1062core::core_arch::aarch64::sve::generatedsvldff1uh_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1063core::core_arch::aarch64::sve::generatedsvldff1uh_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1064core::core_arch::aarch64::sve::generatedsvldff1uh_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1065core::core_arch::aarch64::sve::generatedsvldff1uh_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1066core::core_arch::aarch64::sve::generatedsvldff1uh_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1067core::core_arch::aarch64::sve::generatedsvldff1uh_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1068core::core_arch::aarch64::sve::generatedsvldff1uw_gather_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1069core::core_arch::aarch64::sve::generatedsvldff1uw_gather_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1070core::core_arch::aarch64::sve::generatedsvldff1uw_gather_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1071core::core_arch::aarch64::sve::generatedsvldff1uw_gather_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1072core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1073core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1074core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1075core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1076core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1077core::core_arch::aarch64::sve::generatedsvldff1uw_gather_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details. * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1078 | core::core_arch::aarch64::sve::generated | svldff1uw_gather_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1079 | core::core_arch::aarch64::sve::generated | svldff1uw_gather_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1080 | core::core_arch::aarch64::sve::generated | svldff1uw_gather_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1081 | core::core_arch::aarch64::sve::generated | svldff1uw_gather_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1082 | core::core_arch::aarch64::sve::generated | svldff1uw_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1083 | core::core_arch::aarch64::sve::generated | svldff1uw_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1084 | core::core_arch::aarch64::sve::generated | svldff1uw_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1085 | core::core_arch::aarch64::sve::generated | svldff1uw_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and first-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1086 | core::core_arch::aarch64::sve::generated | svldnf1_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1087 | core::core_arch::aarch64::sve::generated | svldnf1_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1088 | core::core_arch::aarch64::sve::generated | svldnf1_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1089 | core::core_arch::aarch64::sve::generated | svldnf1_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1090 | core::core_arch::aarch64::sve::generated | svldnf1_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1091 | core::core_arch::aarch64::sve::generated | svldnf1_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1092 | core::core_arch::aarch64::sve::generated | svldnf1_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1093 | core::core_arch::aarch64::sve::generated | svldnf1_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1094 | core::core_arch::aarch64::sve::generated | svldnf1_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1095 | core::core_arch::aarch64::sve::generated | svldnf1_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1096 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1097 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1098 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1099 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1100 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1101 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1102 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1103 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1104 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1105 | core::core_arch::aarch64::sve::generated | svldnf1_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1106 | core::core_arch::aarch64::sve::generated | svldnf1sb_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1107 | core::core_arch::aarch64::sve::generated | svldnf1sb_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1108 | core::core_arch::aarch64::sve::generated | svldnf1sb_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1109 | core::core_arch::aarch64::sve::generated | svldnf1sb_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1110 | core::core_arch::aarch64::sve::generated | svldnf1sb_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1111 | core::core_arch::aarch64::sve::generated | svldnf1sb_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1112 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1113 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1114 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1115 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1116 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1117 | core::core_arch::aarch64::sve::generated | svldnf1sb_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1118 | core::core_arch::aarch64::sve::generated | svldnf1sh_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1119 | core::core_arch::aarch64::sve::generated | svldnf1sh_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1120 | core::core_arch::aarch64::sve::generated | svldnf1sh_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1121 | core::core_arch::aarch64::sve::generated | svldnf1sh_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1122 | core::core_arch::aarch64::sve::generated | svldnf1sh_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1123 | core::core_arch::aarch64::sve::generated | svldnf1sh_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1124 | core::core_arch::aarch64::sve::generated | svldnf1sh_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1125 | core::core_arch::aarch64::sve::generated | svldnf1sh_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1126 | core::core_arch::aarch64::sve::generated | svldnf1sw_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1127 | core::core_arch::aarch64::sve::generated | svldnf1sw_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1128 | core::core_arch::aarch64::sve::generated | svldnf1sw_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1129 | core::core_arch::aarch64::sve::generated | svldnf1sw_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1130core::core_arch::aarch64::sve::generatedsvldnf1ub_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1131core::core_arch::aarch64::sve::generatedsvldnf1ub_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1132core::core_arch::aarch64::sve::generatedsvldnf1ub_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1133core::core_arch::aarch64::sve::generatedsvldnf1ub_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1134core::core_arch::aarch64::sve::generatedsvldnf1ub_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1135core::core_arch::aarch64::sve::generatedsvldnf1ub_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1136core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1137core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1138core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1139core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1140core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1141core::core_arch::aarch64::sve::generatedsvldnf1ub_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1142core::core_arch::aarch64::sve::generatedsvldnf1uh_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1143core::core_arch::aarch64::sve::generatedsvldnf1uh_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1144core::core_arch::aarch64::sve::generatedsvldnf1uh_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1145core::core_arch::aarch64::sve::generatedsvldnf1uh_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1146core::core_arch::aarch64::sve::generatedsvldnf1uh_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1147core::core_arch::aarch64::sve::generatedsvldnf1uh_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1148core::core_arch::aarch64::sve::generatedsvldnf1uh_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1149core::core_arch::aarch64::sve::generatedsvldnf1uh_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1150core::core_arch::aarch64::sve::generatedsvldnf1uw_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1151core::core_arch::aarch64::sve::generatedsvldnf1uw_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1152core::core_arch::aarch64::sve::generatedsvldnf1uw_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1153core::core_arch::aarch64::sve::generatedsvldnf1uw_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`, the first-fault register (`FFR`) and non-faulting behaviour). * Result lanes corresponding to inactive FFR lanes (either before or as a result of this intrinsic) have "CONSTRAINED UNPREDICTABLE" values, irrespective of predication. Refer to architectural documentation for details.
1154core::core_arch::aarch64::sve::generatedsvldnt1_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1155core::core_arch::aarch64::sve::generatedsvldnt1_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1156core::core_arch::aarch64::sve::generatedsvldnt1_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1157core::core_arch::aarch64::sve::generatedsvldnt1_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1158core::core_arch::aarch64::sve::generatedsvldnt1_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1159core::core_arch::aarch64::sve::generatedsvldnt1_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1160core::core_arch::aarch64::sve::generatedsvldnt1_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1161core::core_arch::aarch64::sve::generatedsvldnt1_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1162core::core_arch::aarch64::sve::generatedsvldnt1_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1163core::core_arch::aarch64::sve::generatedsvldnt1_u8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1164core::core_arch::aarch64::sve::generatedsvldnt1_vnum_f32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1165core::core_arch::aarch64::sve::generatedsvldnt1_vnum_f64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1166core::core_arch::aarch64::sve::generatedsvldnt1_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1167core::core_arch::aarch64::sve::generatedsvldnt1_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1168core::core_arch::aarch64::sve::generatedsvldnt1_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1169core::core_arch::aarch64::sve::generatedsvldnt1_vnum_s8function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1170core::core_arch::aarch64::sve::generatedsvldnt1_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1171 | core::core_arch::aarch64::sve::generated | svldnt1_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1172 | core::core_arch::aarch64::sve::generated | svldnt1_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1173 | core::core_arch::aarch64::sve::generated | svldnt1_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1174 | core::core_arch::aarch64::sve::generated | svprfb | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1175 | core::core_arch::aarch64::sve::generated | svprfb_gather_s32offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1176 | core::core_arch::aarch64::sve::generated | svprfb_gather_s64offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1177 | core::core_arch::aarch64::sve::generated | svprfb_gather_u32base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1178 | core::core_arch::aarch64::sve::generated | svprfb_gather_u32base_offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1179 | core::core_arch::aarch64::sve::generated | svprfb_gather_u32offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1180 | core::core_arch::aarch64::sve::generated | svprfb_gather_u64base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1181 | core::core_arch::aarch64::sve::generated | svprfb_gather_u64base_offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1182 | core::core_arch::aarch64::sve::generated | svprfb_gather_u64offset | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1183 | core::core_arch::aarch64::sve::generated | svprfb_vnum | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time.
1184 | core::core_arch::aarch64::sve::generated | svprfd | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1185 | core::core_arch::aarch64::sve::generated | svprfd_gather_s32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1186 | core::core_arch::aarch64::sve::generated | svprfd_gather_s64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1187 | core::core_arch::aarch64::sve::generated | svprfd_gather_u32base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1188 | core::core_arch::aarch64::sve::generated | svprfd_gather_u32base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1189 | core::core_arch::aarch64::sve::generated | svprfd_gather_u32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1190 | core::core_arch::aarch64::sve::generated | svprfd_gather_u64base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1191 | core::core_arch::aarch64::sve::generated | svprfd_gather_u64base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1192 | core::core_arch::aarch64::sve::generated | svprfd_gather_u64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1193 | core::core_arch::aarch64::sve::generated | svprfd_vnum | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time.
1194 | core::core_arch::aarch64::sve::generated | svprfh | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1195 | core::core_arch::aarch64::sve::generated | svprfh_gather_s32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1196 | core::core_arch::aarch64::sve::generated | svprfh_gather_s64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1197 | core::core_arch::aarch64::sve::generated | svprfh_gather_u32base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1198 | core::core_arch::aarch64::sve::generated | svprfh_gather_u32base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1199 | core::core_arch::aarch64::sve::generated | svprfh_gather_u32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1200 | core::core_arch::aarch64::sve::generated | svprfh_gather_u64base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1201 | core::core_arch::aarch64::sve::generated | svprfh_gather_u64base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1202 | core::core_arch::aarch64::sve::generated | svprfh_gather_u64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1203 | core::core_arch::aarch64::sve::generated | svprfh_vnum | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time.
1204 | core::core_arch::aarch64::sve::generated | svprfw | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1205 | core::core_arch::aarch64::sve::generated | svprfw_gather_s32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1206 | core::core_arch::aarch64::sve::generated | svprfw_gather_s64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1207 | core::core_arch::aarch64::sve::generated | svprfw_gather_u32base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1208 | core::core_arch::aarch64::sve::generated | svprfw_gather_u32base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1209 | core::core_arch::aarch64::sve::generated | svprfw_gather_u32index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1210 | core::core_arch::aarch64::sve::generated | svprfw_gather_u64base | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1211 | core::core_arch::aarch64::sve::generated | svprfw_gather_u64base_index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1212 | core::core_arch::aarch64::sve::generated | svprfw_gather_u64index | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`).
1213 | core::core_arch::aarch64::sve::generated | svprfw_vnum | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time.
1214 | core::core_arch::aarch64::sve::generated | svst1_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1215 | core::core_arch::aarch64::sve::generated | svst1_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1216 | core::core_arch::aarch64::sve::generated | svst1_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1217 | core::core_arch::aarch64::sve::generated | svst1_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1218 | core::core_arch::aarch64::sve::generated | svst1_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1219 | core::core_arch::aarch64::sve::generated | svst1_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1220 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1221 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1222 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1223 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1224 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1225 | core::core_arch::aarch64::sve::generated | svst1_scatter_s32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1226 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1227 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1228 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1229 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1230 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1231 | core::core_arch::aarch64::sve::generated | svst1_scatter_s64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1232 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1233 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1234 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1235 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1236 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1237 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1238 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1239 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1240 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32base_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1241 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32index_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1242 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32index_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1243 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32index_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1244 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32offset_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1245 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32offset_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1246 | core::core_arch::aarch64::sve::generated | svst1_scatter_u32offset_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1247 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1248 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1249 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1250 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1251 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1252 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1253 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1254 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1255 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1256 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64index_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1257 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1258 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1259 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64offset_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1260 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1261 | core::core_arch::aarch64::sve::generated | svst1_scatter_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1262 | core::core_arch::aarch64::sve::generated | svst1_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1263 | core::core_arch::aarch64::sve::generated | svst1_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1264 | core::core_arch::aarch64::sve::generated | svst1_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1265 | core::core_arch::aarch64::sve::generated | svst1_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1266 | core::core_arch::aarch64::sve::generated | svst1_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1267 | core::core_arch::aarch64::sve::generated | svst1_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1268 | core::core_arch::aarch64::sve::generated | svst1_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1269 | core::core_arch::aarch64::sve::generated | svst1_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1270 | core::core_arch::aarch64::sve::generated | svst1_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1271 | core::core_arch::aarch64::sve::generated | svst1_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1272 | core::core_arch::aarch64::sve::generated | svst1_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1273 | core::core_arch::aarch64::sve::generated | svst1_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1274 | core::core_arch::aarch64::sve::generated | svst1_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1275 | core::core_arch::aarch64::sve::generated | svst1_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1276core::core_arch::aarch64::sve::generatedsvst1b_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1277core::core_arch::aarch64::sve::generatedsvst1b_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1278core::core_arch::aarch64::sve::generatedsvst1b_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1279core::core_arch::aarch64::sve::generatedsvst1b_scatter_s32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1280core::core_arch::aarch64::sve::generatedsvst1b_scatter_s32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1281core::core_arch::aarch64::sve::generatedsvst1b_scatter_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1282core::core_arch::aarch64::sve::generatedsvst1b_scatter_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1283core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1284core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1285core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1286core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1287core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1288core::core_arch::aarch64::sve::generatedsvst1b_scatter_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1289core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1290core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1291core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1292core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1293core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1294core::core_arch::aarch64::sve::generatedsvst1b_scatter_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1295core::core_arch::aarch64::sve::generatedsvst1b_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1296core::core_arch::aarch64::sve::generatedsvst1b_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1297core::core_arch::aarch64::sve::generatedsvst1b_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1298core::core_arch::aarch64::sve::generatedsvst1b_vnum_s16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1299core::core_arch::aarch64::sve::generatedsvst1b_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1300core::core_arch::aarch64::sve::generatedsvst1b_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1301core::core_arch::aarch64::sve::generatedsvst1b_vnum_u16function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1302core::core_arch::aarch64::sve::generatedsvst1b_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1303core::core_arch::aarch64::sve::generatedsvst1b_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1304core::core_arch::aarch64::sve::generatedsvst1h_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1305core::core_arch::aarch64::sve::generatedsvst1h_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1306core::core_arch::aarch64::sve::generatedsvst1h_scatter_s32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1307core::core_arch::aarch64::sve::generatedsvst1h_scatter_s32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1308core::core_arch::aarch64::sve::generatedsvst1h_scatter_s32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1309core::core_arch::aarch64::sve::generatedsvst1h_scatter_s32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1310core::core_arch::aarch64::sve::generatedsvst1h_scatter_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1311core::core_arch::aarch64::sve::generatedsvst1h_scatter_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1312core::core_arch::aarch64::sve::generatedsvst1h_scatter_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1313core::core_arch::aarch64::sve::generatedsvst1h_scatter_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1314core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1315core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1316core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1317core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1318core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1319core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32base_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1320core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32index_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1321core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32index_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1322core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32offset_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1323core::core_arch::aarch64::sve::generatedsvst1h_scatter_u32offset_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1324core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1325core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1326core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1327core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1328core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1329core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64base_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1330core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1331core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1332core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1333core::core_arch::aarch64::sve::generatedsvst1h_scatter_u64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1334core::core_arch::aarch64::sve::generatedsvst1h_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1335core::core_arch::aarch64::sve::generatedsvst1h_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1336core::core_arch::aarch64::sve::generatedsvst1h_vnum_s32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1337core::core_arch::aarch64::sve::generatedsvst1h_vnum_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1338core::core_arch::aarch64::sve::generatedsvst1h_vnum_u32function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1339core::core_arch::aarch64::sve::generatedsvst1h_vnum_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1340core::core_arch::aarch64::sve::generatedsvst1w_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1341core::core_arch::aarch64::sve::generatedsvst1w_scatter_s64index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1342core::core_arch::aarch64::sve::generatedsvst1w_scatter_s64index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1343core::core_arch::aarch64::sve::generatedsvst1w_scatter_s64offset_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1344core::core_arch::aarch64::sve::generatedsvst1w_scatter_s64offset_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1345core::core_arch::aarch64::sve::generatedsvst1w_scatter_u64base_index_s64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1346core::core_arch::aarch64::sve::generatedsvst1w_scatter_u64base_index_u64function* [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1347 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64base_offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1348 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64base_offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1349 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64base_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1350 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64base_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Addresses passed in `bases` lack provenance, so this is similar to using a `usize as ptr` cast (or [`core::ptr::with_exposed_provenance`]) on each lane before using it.
1351 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64index_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1352 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64index_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1353 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64offset_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1354 | core::core_arch::aarch64::sve::generated | svst1w_scatter_u64offset_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1355 | core::core_arch::aarch64::sve::generated | svst1w_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1356 | core::core_arch::aarch64::sve::generated | svst1w_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1357 | core::core_arch::aarch64::sve::generated | svst1w_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1358 | core::core_arch::aarch64::sve::generated | svst2_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1359 | core::core_arch::aarch64::sve::generated | svst2_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1360 | core::core_arch::aarch64::sve::generated | svst2_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1361 | core::core_arch::aarch64::sve::generated | svst2_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1362 | core::core_arch::aarch64::sve::generated | svst2_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1363 | core::core_arch::aarch64::sve::generated | svst2_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1364 | core::core_arch::aarch64::sve::generated | svst2_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1365 | core::core_arch::aarch64::sve::generated | svst2_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1366 | core::core_arch::aarch64::sve::generated | svst2_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1367 | core::core_arch::aarch64::sve::generated | svst2_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1368 | core::core_arch::aarch64::sve::generated | svst2_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1369 | core::core_arch::aarch64::sve::generated | svst2_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1370 | core::core_arch::aarch64::sve::generated | svst2_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1371 | core::core_arch::aarch64::sve::generated | svst2_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1372 | core::core_arch::aarch64::sve::generated | svst2_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1373 | core::core_arch::aarch64::sve::generated | svst2_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1374 | core::core_arch::aarch64::sve::generated | svst2_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1375 | core::core_arch::aarch64::sve::generated | svst2_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1376 | core::core_arch::aarch64::sve::generated | svst2_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1377 | core::core_arch::aarch64::sve::generated | svst2_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1378 | core::core_arch::aarch64::sve::generated | svst3_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1379 | core::core_arch::aarch64::sve::generated | svst3_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1380 | core::core_arch::aarch64::sve::generated | svst3_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1381 | core::core_arch::aarch64::sve::generated | svst3_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1382 | core::core_arch::aarch64::sve::generated | svst3_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1383 | core::core_arch::aarch64::sve::generated | svst3_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1384 | core::core_arch::aarch64::sve::generated | svst3_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1385 | core::core_arch::aarch64::sve::generated | svst3_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1386 | core::core_arch::aarch64::sve::generated | svst3_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1387 | core::core_arch::aarch64::sve::generated | svst3_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1388 | core::core_arch::aarch64::sve::generated | svst3_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1389 | core::core_arch::aarch64::sve::generated | svst3_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1390 | core::core_arch::aarch64::sve::generated | svst3_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1391 | core::core_arch::aarch64::sve::generated | svst3_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1392 | core::core_arch::aarch64::sve::generated | svst3_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1393 | core::core_arch::aarch64::sve::generated | svst3_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1394 | core::core_arch::aarch64::sve::generated | svst3_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1395 | core::core_arch::aarch64::sve::generated | svst3_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1396 | core::core_arch::aarch64::sve::generated | svst3_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1397 | core::core_arch::aarch64::sve::generated | svst3_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1398 | core::core_arch::aarch64::sve::generated | svst4_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1399 | core::core_arch::aarch64::sve::generated | svst4_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1400 | core::core_arch::aarch64::sve::generated | svst4_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1401 | core::core_arch::aarch64::sve::generated | svst4_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1402 | core::core_arch::aarch64::sve::generated | svst4_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1403 | core::core_arch::aarch64::sve::generated | svst4_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1404 | core::core_arch::aarch64::sve::generated | svst4_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1405 | core::core_arch::aarch64::sve::generated | svst4_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1406 | core::core_arch::aarch64::sve::generated | svst4_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1407 | core::core_arch::aarch64::sve::generated | svst4_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1408 | core::core_arch::aarch64::sve::generated | svst4_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1409 | core::core_arch::aarch64::sve::generated | svst4_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1410 | core::core_arch::aarch64::sve::generated | svst4_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1411 | core::core_arch::aarch64::sve::generated | svst4_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1412 | core::core_arch::aarch64::sve::generated | svst4_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1413 | core::core_arch::aarch64::sve::generated | svst4_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1414 | core::core_arch::aarch64::sve::generated | svst4_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1415 | core::core_arch::aarch64::sve::generated | svst4_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1416 | core::core_arch::aarch64::sve::generated | svst4_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1417 | core::core_arch::aarch64::sve::generated | svst4_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). In particular, note that `vnum` is scaled by the vector length, `VL`, which is not known at compile time. * This dereferences and accesses the calculated address for each active element (governed by `pg`).
1418 | core::core_arch::aarch64::sve::generated | svstnt1_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1419 | core::core_arch::aarch64::sve::generated | svstnt1_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1420 | core::core_arch::aarch64::sve::generated | svstnt1_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1421 | core::core_arch::aarch64::sve::generated | svstnt1_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1422 | core::core_arch::aarch64::sve::generated | svstnt1_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1423 | core::core_arch::aarch64::sve::generated | svstnt1_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1424 | core::core_arch::aarch64::sve::generated | svstnt1_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1425 | core::core_arch::aarch64::sve::generated | svstnt1_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1426 | core::core_arch::aarch64::sve::generated | svstnt1_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1427 | core::core_arch::aarch64::sve::generated | svstnt1_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1428 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_f32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1429 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_f64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1430 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_s16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1431 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_s32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1432 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_s64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1433 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_s8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1434 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_u16 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1435 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_u32 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1436 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_u64 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1437 | core::core_arch::aarch64::sve::generated | svstnt1_vnum_u8 | function | * [`pointer::offset`](pointer#method.offset) safety constraints must be met for the address calculation for each active element (governed by `pg`). * This dereferences and accesses the calculated address for each active element (governed by `pg`). * Non-temporal accesses have special memory ordering rules, and [explicit barriers may be required for some applications](https://developer.arm.com/documentation/den0024/a/Memory-Ordering/Barriers/Non-temporal-load-and-store-pair?lang=en).
1438 | core::core_arch::aarch64::sve::generated | svundef2_f32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1439 | core::core_arch::aarch64::sve::generated | svundef2_f64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1440 | core::core_arch::aarch64::sve::generated | svundef2_s16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1441 | core::core_arch::aarch64::sve::generated | svundef2_s32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1442 | core::core_arch::aarch64::sve::generated | svundef2_s64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1443 | core::core_arch::aarch64::sve::generated | svundef2_s8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1444 | core::core_arch::aarch64::sve::generated | svundef2_u16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1445 | core::core_arch::aarch64::sve::generated | svundef2_u32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1446 | core::core_arch::aarch64::sve::generated | svundef2_u64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1447 | core::core_arch::aarch64::sve::generated | svundef2_u8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1448 | core::core_arch::aarch64::sve::generated | svundef3_f32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1449 | core::core_arch::aarch64::sve::generated | svundef3_f64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1450 | core::core_arch::aarch64::sve::generated | svundef3_s16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1451 | core::core_arch::aarch64::sve::generated | svundef3_s32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1452 | core::core_arch::aarch64::sve::generated | svundef3_s64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1453 | core::core_arch::aarch64::sve::generated | svundef3_s8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1454 | core::core_arch::aarch64::sve::generated | svundef3_u16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1455 | core::core_arch::aarch64::sve::generated | svundef3_u32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1456 | core::core_arch::aarch64::sve::generated | svundef3_u64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1457 | core::core_arch::aarch64::sve::generated | svundef3_u8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1458 | core::core_arch::aarch64::sve::generated | svundef4_f32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1459 | core::core_arch::aarch64::sve::generated | svundef4_f64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1460 | core::core_arch::aarch64::sve::generated | svundef4_s16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1461 | core::core_arch::aarch64::sve::generated | svundef4_s32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1462 | core::core_arch::aarch64::sve::generated | svundef4_s64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1463 | core::core_arch::aarch64::sve::generated | svundef4_s8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1464 | core::core_arch::aarch64::sve::generated | svundef4_u16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1465 | core::core_arch::aarch64::sve::generated | svundef4_u32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1466 | core::core_arch::aarch64::sve::generated | svundef4_u64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1467 | core::core_arch::aarch64::sve::generated | svundef4_u8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1468 | core::core_arch::aarch64::sve::generated | svundef_f32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1469 | core::core_arch::aarch64::sve::generated | svundef_f64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1470 | core::core_arch::aarch64::sve::generated | svundef_s16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1471 | core::core_arch::aarch64::sve::generated | svundef_s32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1472 | core::core_arch::aarch64::sve::generated | svundef_s64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1473 | core::core_arch::aarch64::sve::generated | svundef_s8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1474 | core::core_arch::aarch64::sve::generated | svundef_u16 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1475 | core::core_arch::aarch64::sve::generated | svundef_u32 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1476 | core::core_arch::aarch64::sve::generated | svundef_u64 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1477 | core::core_arch::aarch64::sve::generated | svundef_u8 | function | * This creates an uninitialized value, and may be unsound (like [`core::mem::uninitialized`]).
1478 | core::core_arch::amdgpu | ds_bpermute | function |
1479 | core::core_arch::amdgpu | ds_permute | function |
1480 | core::core_arch::amdgpu | perm | function |
1481 | core::core_arch::amdgpu | permlane16_swap | function |
1482 | core::core_arch::amdgpu | permlane16_u32 | function |
1483 | core::core_arch::amdgpu | permlane16_var | function |
1484 | core::core_arch::amdgpu | permlane32_swap | function |
1485 | core::core_arch::amdgpu | permlane64_u32 | function |
1486 | core::core_arch::amdgpu | permlanex16_u32 | function |
1487 | core::core_arch::amdgpu | permlanex16_var | function |
1488 | core::core_arch::amdgpu | readlane_u32 | function |
1489 | core::core_arch::amdgpu | readlane_u64 | function |
1490 | core::core_arch::amdgpu | s_barrier_signal | function |
1491 | core::core_arch::amdgpu | s_barrier_signal_isfirst | function |
1492 | core::core_arch::amdgpu | s_barrier_wait | function |
1493 | core::core_arch::amdgpu | s_get_barrier_state | function |
1494 | core::core_arch::amdgpu | sched_barrier | function |
1495 | core::core_arch::amdgpu | sched_group_barrier | function |
1496 | core::core_arch::amdgpu | update_dpp | function |
1497 | core::core_arch::amdgpu | writelane_u32 | function |
1498 | core::core_arch::amdgpu | writelane_u64 | function |
1499 | core::core_arch::arm::dsp | __qadd | function |
1500 | core::core_arch::arm::dsp | __qdbl | function |
1501 | core::core_arch::arm::dsp | __qsub | function |
1502 | core::core_arch::arm::dsp | __smlabb | function |
1503 | core::core_arch::arm::dsp | __smlabt | function |
1504 | core::core_arch::arm::dsp | __smlatb | function |
1505 | core::core_arch::arm::dsp | __smlatt | function |
1506 | core::core_arch::arm::dsp | __smlawb | function |
1507 | core::core_arch::arm::dsp | __smlawt | function |
1508 | core::core_arch::arm::dsp | __smulbb | function |
1509 | core::core_arch::arm::dsp | __smulbt | function |
1510 | core::core_arch::arm::dsp | __smultb | function |
1511 | core::core_arch::arm::dsp | __smultt | function |
1512 | core::core_arch::arm::dsp | __smulwb | function |
1513 | core::core_arch::arm::dsp | __smulwt | function |
1514 | core::core_arch::arm::sat | __ssat | function |
1515 | core::core_arch::arm::sat | __usat | function |
1516 | core::core_arch::arm::simd32 | __qadd16 | function |
1517 | core::core_arch::arm::simd32 | __qadd8 | function |
1518 | core::core_arch::arm::simd32 | __qasx | function |
1519 | core::core_arch::arm::simd32 | __qsax | function |
1520 | core::core_arch::arm::simd32 | __qsub16 | function |
1521 | core::core_arch::arm::simd32 | __qsub8 | function |
1522 | core::core_arch::arm::simd32 | __sadd16 | function |
1523 | core::core_arch::arm::simd32 | __sadd8 | function |
1524 | core::core_arch::arm::simd32 | __sasx | function |
1525 | core::core_arch::arm::simd32 | __sel | function |
1526 | core::core_arch::arm::simd32 | __shadd16 | function |
1527 | core::core_arch::arm::simd32 | __shadd8 | function |
1528 | core::core_arch::arm::simd32 | __shsub16 | function |
1529 | core::core_arch::arm::simd32 | __shsub8 | function |
1530 | core::core_arch::arm::simd32 | __smlad | function |
1531 | core::core_arch::arm::simd32 | __smlsd | function |
1532 | core::core_arch::arm::simd32 | __smuad | function |
1533 | core::core_arch::arm::simd32 | __smuadx | function |
1534 | core::core_arch::arm::simd32 | __smusd | function |
1535 | core::core_arch::arm::simd32 | __smusdx | function |
1536 | core::core_arch::arm::simd32 | __ssub8 | function |
1537 | core::core_arch::arm::simd32 | __usad8 | function |
1538 | core::core_arch::arm::simd32 | __usada8 | function |
1539 | core::core_arch::arm::simd32 | __usub8 | function |
1540 | core::core_arch::arm_shared::barrier | __dmb | function |
1541 | core::core_arch::arm_shared::barrier | __dsb | function |
1542 | core::core_arch::arm_shared::barrier | __isb | function |
1543 | core::core_arch::arm_shared::hints | __nop | function |
1544 | core::core_arch::arm_shared::hints | __sev | function |
1545 | core::core_arch::arm_shared::hints | __sevl | function |
1546 | core::core_arch::arm_shared::hints | __wfe | function |
1547 | core::core_arch::arm_shared::hints | __wfi | function |
1548 | core::core_arch::arm_shared::hints | __yield | function |
1549 | core::core_arch::arm_shared::neon::generated | vext_s64 | function | * Neon intrinsic unsafe
1550 | core::core_arch::arm_shared::neon::generated | vext_u64 | function | * Neon intrinsic unsafe
1551 | core::core_arch::arm_shared::neon::generated | vld1_dup_f16 | function | * Neon intrinsic unsafe
1552 | core::core_arch::arm_shared::neon::generated | vld1_dup_f32 | function | * Neon intrinsic unsafe
1553 | core::core_arch::arm_shared::neon::generated | vld1_dup_p16 | function | * Neon intrinsic unsafe
1554 | core::core_arch::arm_shared::neon::generated | vld1_dup_p64 | function | * Neon intrinsic unsafe
1555 | core::core_arch::arm_shared::neon::generated | vld1_dup_p8 | function | * Neon intrinsic unsafe
1556 | core::core_arch::arm_shared::neon::generated | vld1_dup_s16 | function | * Neon intrinsic unsafe
1557 | core::core_arch::arm_shared::neon::generated | vld1_dup_s32 | function | * Neon intrinsic unsafe
1558 | core::core_arch::arm_shared::neon::generated | vld1_dup_s64 | function | * Neon intrinsic unsafe
1559 | core::core_arch::arm_shared::neon::generated | vld1_dup_s8 | function | * Neon intrinsic unsafe
1560 | core::core_arch::arm_shared::neon::generated | vld1_dup_u16 | function | * Neon intrinsic unsafe
1561 | core::core_arch::arm_shared::neon::generated | vld1_dup_u32 | function | * Neon intrinsic unsafe
1562 | core::core_arch::arm_shared::neon::generated | vld1_dup_u64 | function | * Neon intrinsic unsafe
1563 | core::core_arch::arm_shared::neon::generated | vld1_dup_u8 | function | * Neon intrinsic unsafe
1564 | core::core_arch::arm_shared::neon::generated | vld1_f16_x2 | function | * Neon intrinsic unsafe
1565 | core::core_arch::arm_shared::neon::generated | vld1_f16_x3 | function | * Neon intrinsic unsafe
1566 | core::core_arch::arm_shared::neon::generated | vld1_f16_x4 | function | * Neon intrinsic unsafe
1567 | core::core_arch::arm_shared::neon::generated | vld1_f32_x2 | function | * Neon intrinsic unsafe
1568 | core::core_arch::arm_shared::neon::generated | vld1_f32_x3 | function | * Neon intrinsic unsafe
1569 | core::core_arch::arm_shared::neon::generated | vld1_f32_x4 | function | * Neon intrinsic unsafe
1570 | core::core_arch::arm_shared::neon::generated | vld1_lane_f16 | function | * Neon intrinsic unsafe
1571 | core::core_arch::arm_shared::neon::generated | vld1_lane_f32 | function | * Neon intrinsic unsafe
1572 | core::core_arch::arm_shared::neon::generated | vld1_lane_p16 | function | * Neon intrinsic unsafe
1573 | core::core_arch::arm_shared::neon::generated | vld1_lane_p64 | function | * Neon intrinsic unsafe
1574 | core::core_arch::arm_shared::neon::generated | vld1_lane_p8 | function | * Neon intrinsic unsafe
1575 | core::core_arch::arm_shared::neon::generated | vld1_lane_s16 | function | * Neon intrinsic unsafe
1576 | core::core_arch::arm_shared::neon::generated | vld1_lane_s32 | function | * Neon intrinsic unsafe
1577 | core::core_arch::arm_shared::neon::generated | vld1_lane_s64 | function | * Neon intrinsic unsafe
1578 | core::core_arch::arm_shared::neon::generated | vld1_lane_s8 | function | * Neon intrinsic unsafe
1579 | core::core_arch::arm_shared::neon::generated | vld1_lane_u16 | function | * Neon intrinsic unsafe
1580 | core::core_arch::arm_shared::neon::generated | vld1_lane_u32 | function | * Neon intrinsic unsafe
1581 | core::core_arch::arm_shared::neon::generated | vld1_lane_u64 | function | * Neon intrinsic unsafe
1582 | core::core_arch::arm_shared::neon::generated | vld1_lane_u8 | function | * Neon intrinsic unsafe
1583 | core::core_arch::arm_shared::neon::generated | vld1_p16_x2 | function | * Neon intrinsic unsafe
1584 | core::core_arch::arm_shared::neon::generated | vld1_p16_x3 | function | * Neon intrinsic unsafe
1585 | core::core_arch::arm_shared::neon::generated | vld1_p16_x4 | function | * Neon intrinsic unsafe
1586 | core::core_arch::arm_shared::neon::generated | vld1_p64_x2 | function | * Neon intrinsic unsafe
1587 | core::core_arch::arm_shared::neon::generated | vld1_p64_x3 | function | * Neon intrinsic unsafe
1588 | core::core_arch::arm_shared::neon::generated | vld1_p64_x4 | function | * Neon intrinsic unsafe
1589 | core::core_arch::arm_shared::neon::generated | vld1_p8_x2 | function | * Neon intrinsic unsafe
1590 | core::core_arch::arm_shared::neon::generated | vld1_p8_x3 | function | * Neon intrinsic unsafe
1591 | core::core_arch::arm_shared::neon::generated | vld1_p8_x4 | function | * Neon intrinsic unsafe
1592 | core::core_arch::arm_shared::neon::generated | vld1_s16_x2 | function | * Neon intrinsic unsafe
1593 | core::core_arch::arm_shared::neon::generated | vld1_s16_x3 | function | * Neon intrinsic unsafe
1594 | core::core_arch::arm_shared::neon::generated | vld1_s16_x4 | function | * Neon intrinsic unsafe
1595 | core::core_arch::arm_shared::neon::generated | vld1_s32_x2 | function | * Neon intrinsic unsafe
1596 | core::core_arch::arm_shared::neon::generated | vld1_s32_x3 | function | * Neon intrinsic unsafe
1597 | core::core_arch::arm_shared::neon::generated | vld1_s32_x4 | function | * Neon intrinsic unsafe
1598 | core::core_arch::arm_shared::neon::generated | vld1_s64_x2 | function | * Neon intrinsic unsafe
1599 | core::core_arch::arm_shared::neon::generated | vld1_s64_x3 | function | * Neon intrinsic unsafe
1600 | core::core_arch::arm_shared::neon::generated | vld1_s64_x4 | function | * Neon intrinsic unsafe
1601 | core::core_arch::arm_shared::neon::generated | vld1_s8_x2 | function | * Neon intrinsic unsafe
1602 | core::core_arch::arm_shared::neon::generated | vld1_s8_x3 | function | * Neon intrinsic unsafe
1603 | core::core_arch::arm_shared::neon::generated | vld1_s8_x4 | function | * Neon intrinsic unsafe
1604 | core::core_arch::arm_shared::neon::generated | vld1_u16_x2 | function | * Neon intrinsic unsafe
1605 | core::core_arch::arm_shared::neon::generated | vld1_u16_x3 | function | * Neon intrinsic unsafe
1606 | core::core_arch::arm_shared::neon::generated | vld1_u16_x4 | function | * Neon intrinsic unsafe
1607 | core::core_arch::arm_shared::neon::generated | vld1_u32_x2 | function | * Neon intrinsic unsafe
1608 | core::core_arch::arm_shared::neon::generated | vld1_u32_x3 | function | * Neon intrinsic unsafe
1609 | core::core_arch::arm_shared::neon::generated | vld1_u32_x4 | function | * Neon intrinsic unsafe
1610 | core::core_arch::arm_shared::neon::generated | vld1_u64_x2 | function | * Neon intrinsic unsafe
1611 | core::core_arch::arm_shared::neon::generated | vld1_u64_x3 | function | * Neon intrinsic unsafe
1612 | core::core_arch::arm_shared::neon::generated | vld1_u64_x4 | function | * Neon intrinsic unsafe
1613 | core::core_arch::arm_shared::neon::generated | vld1_u8_x2 | function | * Neon intrinsic unsafe
1614 | core::core_arch::arm_shared::neon::generated | vld1_u8_x3 | function | * Neon intrinsic unsafe
1615 | core::core_arch::arm_shared::neon::generated | vld1_u8_x4 | function | * Neon intrinsic unsafe
1616 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f16 | function | * Neon intrinsic unsafe
1617 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f32 | function | * Neon intrinsic unsafe
1618 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p16 | function | * Neon intrinsic unsafe
1619 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p64 | function | * Neon intrinsic unsafe
1620 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p8 | function | * Neon intrinsic unsafe
1621 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s16 | function | * Neon intrinsic unsafe
1622 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s32 | function | * Neon intrinsic unsafe
1623 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s64 | function | * Neon intrinsic unsafe
1624 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s8 | function | * Neon intrinsic unsafe
1625 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u16 | function | * Neon intrinsic unsafe
1626 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u32 | function | * Neon intrinsic unsafe
1627 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u64 | function | * Neon intrinsic unsafe
1628 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u8 | function | * Neon intrinsic unsafe
1629 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x2 | function | * Neon intrinsic unsafe
1630 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x3 | function | * Neon intrinsic unsafe
1631 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x4 | function | * Neon intrinsic unsafe
1632 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x2 | function | * Neon intrinsic unsafe
1633 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x3 | function | * Neon intrinsic unsafe
1634 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x4 | function | * Neon intrinsic unsafe
1635 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f16 | function | * Neon intrinsic unsafe
1636 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f32 | function | * Neon intrinsic unsafe
1637 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p16 | function | * Neon intrinsic unsafe
1638 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p64 | function | * Neon intrinsic unsafe
1639 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p8 | function | * Neon intrinsic unsafe
1640 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s16 | function | * Neon intrinsic unsafe
1641 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s32 | function | * Neon intrinsic unsafe
1642 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s64 | function | * Neon intrinsic unsafe
1643 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s8 | function | * Neon intrinsic unsafe
1644 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u16 | function | * Neon intrinsic unsafe
1645 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u32 | function | * Neon intrinsic unsafe
1646 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u64 | function | * Neon intrinsic unsafe
1647 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u8 | function | * Neon intrinsic unsafe
1648 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x2 | function | * Neon intrinsic unsafe
1649 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x3 | function | * Neon intrinsic unsafe
1650 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x4 | function | * Neon intrinsic unsafe
1651 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x2 | function | * Neon intrinsic unsafe
1652 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x3 | function | * Neon intrinsic unsafe
1653 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x4 | function | * Neon intrinsic unsafe
1654 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x2 | function | * Neon intrinsic unsafe
1655 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x3 | function | * Neon intrinsic unsafe
1656 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x4 | function | * Neon intrinsic unsafe
1657 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x2 | function | * Neon intrinsic unsafe
1658 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x3 | function | * Neon intrinsic unsafe
1659 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x4 | function | * Neon intrinsic unsafe
1660 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x2 | function | * Neon intrinsic unsafe
1661 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x3 | function | * Neon intrinsic unsafe
1662 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x4 | function | * Neon intrinsic unsafe
1663 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x2 | function | * Neon intrinsic unsafe
1664 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x3 | function | * Neon intrinsic unsafe
1665 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x4 | function | * Neon intrinsic unsafe
1666 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x2 | function | * Neon intrinsic unsafe
1667 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x3 | function | * Neon intrinsic unsafe
1668 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x4 | function | * Neon intrinsic unsafe
1669 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x2 | function | * Neon intrinsic unsafe
1670 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x3 | function | * Neon intrinsic unsafe
1671 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x4 | function | * Neon intrinsic unsafe
1672 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x2 | function | * Neon intrinsic unsafe
1673 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x3 | function | * Neon intrinsic unsafe
1674 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x4 | function | * Neon intrinsic unsafe
1675 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x2 | function | * Neon intrinsic unsafe
1676 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x3 | function | * Neon intrinsic unsafe
1677 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x4 | function | * Neon intrinsic unsafe
1678 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x2 | function | * Neon intrinsic unsafe
1679 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x3 | function | * Neon intrinsic unsafe
1680 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x4 | function | * Neon intrinsic unsafe
1681 | core::core_arch::arm_shared::neon::generated | vld2_dup_f16 | function | * Neon intrinsic unsafe
1682 | core::core_arch::arm_shared::neon::generated | vld2_dup_f32 | function | * Neon intrinsic unsafe
1683 | core::core_arch::arm_shared::neon::generated | vld2_dup_p16 | function | * Neon intrinsic unsafe
1684 | core::core_arch::arm_shared::neon::generated | vld2_dup_p64 | function | * Neon intrinsic unsafe
1685 | core::core_arch::arm_shared::neon::generated | vld2_dup_p8 | function | * Neon intrinsic unsafe
1686 | core::core_arch::arm_shared::neon::generated | vld2_dup_s16 | function | * Neon intrinsic unsafe
1687 | core::core_arch::arm_shared::neon::generated | vld2_dup_s32 | function | * Neon intrinsic unsafe
1688 | core::core_arch::arm_shared::neon::generated | vld2_dup_s64 | function | * Neon intrinsic unsafe
1689 | core::core_arch::arm_shared::neon::generated | vld2_dup_s8 | function | * Neon intrinsic unsafe
1690 | core::core_arch::arm_shared::neon::generated | vld2_dup_u16 | function | * Neon intrinsic unsafe
1691 | core::core_arch::arm_shared::neon::generated | vld2_dup_u32 | function | * Neon intrinsic unsafe
1692 | core::core_arch::arm_shared::neon::generated | vld2_dup_u64 | function | * Neon intrinsic unsafe
1693 | core::core_arch::arm_shared::neon::generated | vld2_dup_u8 | function | * Neon intrinsic unsafe
1694 | core::core_arch::arm_shared::neon::generated | vld2_f16 | function | * Neon intrinsic unsafe
1695 | core::core_arch::arm_shared::neon::generated | vld2_f32 | function | * Neon intrinsic unsafe
1696 | core::core_arch::arm_shared::neon::generated | vld2_lane_f16 | function | * Neon intrinsic unsafe
1697 | core::core_arch::arm_shared::neon::generated | vld2_lane_f32 | function | * Neon intrinsic unsafe
1698 | core::core_arch::arm_shared::neon::generated | vld2_lane_p16 | function | * Neon intrinsic unsafe
1699 | core::core_arch::arm_shared::neon::generated | vld2_lane_p8 | function | * Neon intrinsic unsafe
1700 | core::core_arch::arm_shared::neon::generated | vld2_lane_s16 | function | * Neon intrinsic unsafe
1701 | core::core_arch::arm_shared::neon::generated | vld2_lane_s32 | function | * Neon intrinsic unsafe
1702 | core::core_arch::arm_shared::neon::generated | vld2_lane_s8 | function | * Neon intrinsic unsafe
1703 | core::core_arch::arm_shared::neon::generated | vld2_lane_u16 | function | * Neon intrinsic unsafe
1704 | core::core_arch::arm_shared::neon::generated | vld2_lane_u32 | function | * Neon intrinsic unsafe
1705 | core::core_arch::arm_shared::neon::generated | vld2_lane_u8 | function | * Neon intrinsic unsafe
1706 | core::core_arch::arm_shared::neon::generated | vld2_p16 | function | * Neon intrinsic unsafe
1707 | core::core_arch::arm_shared::neon::generated | vld2_p64 | function | * Neon intrinsic unsafe
1708 | core::core_arch::arm_shared::neon::generated | vld2_p8 | function | * Neon intrinsic unsafe
1709 | core::core_arch::arm_shared::neon::generated | vld2_s16 | function | * Neon intrinsic unsafe
1710 | core::core_arch::arm_shared::neon::generated | vld2_s32 | function | * Neon intrinsic unsafe
1711 | core::core_arch::arm_shared::neon::generated | vld2_s64 | function | * Neon intrinsic unsafe
1712 | core::core_arch::arm_shared::neon::generated | vld2_s8 | function | * Neon intrinsic unsafe
1713 | core::core_arch::arm_shared::neon::generated | vld2_u16 | function | * Neon intrinsic unsafe
1714 | core::core_arch::arm_shared::neon::generated | vld2_u32 | function | * Neon intrinsic unsafe
1715 | core::core_arch::arm_shared::neon::generated | vld2_u64 | function | * Neon intrinsic unsafe
1716 | core::core_arch::arm_shared::neon::generated | vld2_u8 | function | * Neon intrinsic unsafe
1717 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f16 | function | * Neon intrinsic unsafe
1718 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f32 | function | * Neon intrinsic unsafe
1719 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p16 | function | * Neon intrinsic unsafe
1720 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p8 | function | * Neon intrinsic unsafe
1721 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s16 | function | * Neon intrinsic unsafe
1722 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s32 | function | * Neon intrinsic unsafe
1723 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s8 | function | * Neon intrinsic unsafe
1724 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u16 | function | * Neon intrinsic unsafe
1725 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u32 | function | * Neon intrinsic unsafe
1726 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u8 | function | * Neon intrinsic unsafe
1727 | core::core_arch::arm_shared::neon::generated | vld2q_f16 | function | * Neon intrinsic unsafe
1728 | core::core_arch::arm_shared::neon::generated | vld2q_f32 | function | * Neon intrinsic unsafe
1729 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f16 | function | * Neon intrinsic unsafe
1730 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f32 | function | * Neon intrinsic unsafe
1731 | core::core_arch::arm_shared::neon::generated | vld2q_lane_p16 | function | * Neon intrinsic unsafe
1732 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s16 | function | * Neon intrinsic unsafe
1733 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s32 | function | * Neon intrinsic unsafe
1734 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u16 | function | * Neon intrinsic unsafe
1735 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u32 | function | * Neon intrinsic unsafe
1736 | core::core_arch::arm_shared::neon::generated | vld2q_p16 | function | * Neon intrinsic unsafe
1737 | core::core_arch::arm_shared::neon::generated | vld2q_p8 | function | * Neon intrinsic unsafe
1738 | core::core_arch::arm_shared::neon::generated | vld2q_s16 | function | * Neon intrinsic unsafe
1739 | core::core_arch::arm_shared::neon::generated | vld2q_s32 | function | * Neon intrinsic unsafe
1740 | core::core_arch::arm_shared::neon::generated | vld2q_s8 | function | * Neon intrinsic unsafe
1741 | core::core_arch::arm_shared::neon::generated | vld2q_u16 | function | * Neon intrinsic unsafe
1742 | core::core_arch::arm_shared::neon::generated | vld2q_u32 | function | * Neon intrinsic unsafe
1743 | core::core_arch::arm_shared::neon::generated | vld2q_u8 | function | * Neon intrinsic unsafe
1744 | core::core_arch::arm_shared::neon::generated | vld3_dup_f16 | function | * Neon intrinsic unsafe
1745 | core::core_arch::arm_shared::neon::generated | vld3_dup_f32 | function | * Neon intrinsic unsafe
1746 | core::core_arch::arm_shared::neon::generated | vld3_dup_p16 | function | * Neon intrinsic unsafe
1747 | core::core_arch::arm_shared::neon::generated | vld3_dup_p64 | function | * Neon intrinsic unsafe
1748 | core::core_arch::arm_shared::neon::generated | vld3_dup_p8 | function | * Neon intrinsic unsafe
1749 | core::core_arch::arm_shared::neon::generated | vld3_dup_s16 | function | * Neon intrinsic unsafe
1750 | core::core_arch::arm_shared::neon::generated | vld3_dup_s32 | function | * Neon intrinsic unsafe
1751 | core::core_arch::arm_shared::neon::generated | vld3_dup_s64 | function | * Neon intrinsic unsafe
1752 | core::core_arch::arm_shared::neon::generated | vld3_dup_s8 | function | * Neon intrinsic unsafe
1753 | core::core_arch::arm_shared::neon::generated | vld3_dup_u16 | function | * Neon intrinsic unsafe
1754 | core::core_arch::arm_shared::neon::generated | vld3_dup_u32 | function | * Neon intrinsic unsafe
1755 | core::core_arch::arm_shared::neon::generated | vld3_dup_u64 | function | * Neon intrinsic unsafe
1756 | core::core_arch::arm_shared::neon::generated | vld3_dup_u8 | function | * Neon intrinsic unsafe
1757 | core::core_arch::arm_shared::neon::generated | vld3_f16 | function | * Neon intrinsic unsafe
1758 | core::core_arch::arm_shared::neon::generated | vld3_f32 | function | * Neon intrinsic unsafe
1759 | core::core_arch::arm_shared::neon::generated | vld3_lane_f16 | function | * Neon intrinsic unsafe
1760 | core::core_arch::arm_shared::neon::generated | vld3_lane_f32 | function | * Neon intrinsic unsafe
1761 | core::core_arch::arm_shared::neon::generated | vld3_lane_p16 | function | * Neon intrinsic unsafe
1762 | core::core_arch::arm_shared::neon::generated | vld3_lane_p8 | function | * Neon intrinsic unsafe
1763 | core::core_arch::arm_shared::neon::generated | vld3_lane_s16 | function | * Neon intrinsic unsafe
1764 | core::core_arch::arm_shared::neon::generated | vld3_lane_s32 | function | * Neon intrinsic unsafe
1765 | core::core_arch::arm_shared::neon::generated | vld3_lane_s8 | function | * Neon intrinsic unsafe
1766 | core::core_arch::arm_shared::neon::generated | vld3_lane_u16 | function | * Neon intrinsic unsafe
1767 | core::core_arch::arm_shared::neon::generated | vld3_lane_u32 | function | * Neon intrinsic unsafe
1768 | core::core_arch::arm_shared::neon::generated | vld3_lane_u8 | function | * Neon intrinsic unsafe
1769 | core::core_arch::arm_shared::neon::generated | vld3_p16 | function | * Neon intrinsic unsafe
1770 | core::core_arch::arm_shared::neon::generated | vld3_p64 | function | * Neon intrinsic unsafe
1771 | core::core_arch::arm_shared::neon::generated | vld3_p8 | function | * Neon intrinsic unsafe
1772 | core::core_arch::arm_shared::neon::generated | vld3_s16 | function | * Neon intrinsic unsafe
1773 | core::core_arch::arm_shared::neon::generated | vld3_s32 | function | * Neon intrinsic unsafe
1774 | core::core_arch::arm_shared::neon::generated | vld3_s64 | function | * Neon intrinsic unsafe
1775 | core::core_arch::arm_shared::neon::generated | vld3_s8 | function | * Neon intrinsic unsafe
1776 | core::core_arch::arm_shared::neon::generated | vld3_u16 | function | * Neon intrinsic unsafe
1777 | core::core_arch::arm_shared::neon::generated | vld3_u32 | function | * Neon intrinsic unsafe
1778 | core::core_arch::arm_shared::neon::generated | vld3_u64 | function | * Neon intrinsic unsafe
1779 | core::core_arch::arm_shared::neon::generated | vld3_u8 | function | * Neon intrinsic unsafe
1780 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f16 | function | * Neon intrinsic unsafe
1781 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f32 | function | * Neon intrinsic unsafe
1782 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p16 | function | * Neon intrinsic unsafe
1783 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p8 | function | * Neon intrinsic unsafe
1784 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s16 | function | * Neon intrinsic unsafe
1785 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s32 | function | * Neon intrinsic unsafe
1786 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s8 | function | * Neon intrinsic unsafe
1787 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u16 | function | * Neon intrinsic unsafe
1788 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u32 | function | * Neon intrinsic unsafe
1789 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u8 | function | * Neon intrinsic unsafe
1790 | core::core_arch::arm_shared::neon::generated | vld3q_f16 | function | * Neon intrinsic unsafe
1791core::core_arch::arm_shared::neon::generatedvld3q_f32function* Neon intrinsic unsafe
1792core::core_arch::arm_shared::neon::generatedvld3q_lane_f16function* Neon intrinsic unsafe
1793core::core_arch::arm_shared::neon::generatedvld3q_lane_f32function* Neon intrinsic unsafe
1794core::core_arch::arm_shared::neon::generatedvld3q_lane_p16function* Neon intrinsic unsafe
1795core::core_arch::arm_shared::neon::generatedvld3q_lane_s16function* Neon intrinsic unsafe
1796core::core_arch::arm_shared::neon::generatedvld3q_lane_s32function* Neon intrinsic unsafe
1797core::core_arch::arm_shared::neon::generatedvld3q_lane_u16function* Neon intrinsic unsafe
1798core::core_arch::arm_shared::neon::generatedvld3q_lane_u32function* Neon intrinsic unsafe
1799core::core_arch::arm_shared::neon::generatedvld3q_p16function* Neon intrinsic unsafe
1800core::core_arch::arm_shared::neon::generatedvld3q_p8function* Neon intrinsic unsafe
1801core::core_arch::arm_shared::neon::generatedvld3q_s16function* Neon intrinsic unsafe
1802core::core_arch::arm_shared::neon::generatedvld3q_s32function* Neon intrinsic unsafe
1803core::core_arch::arm_shared::neon::generatedvld3q_s8function* Neon intrinsic unsafe
1804core::core_arch::arm_shared::neon::generatedvld3q_u16function* Neon intrinsic unsafe
1805core::core_arch::arm_shared::neon::generatedvld3q_u32function* Neon intrinsic unsafe
1806core::core_arch::arm_shared::neon::generatedvld3q_u8function* Neon intrinsic unsafe
1807core::core_arch::arm_shared::neon::generatedvld4_dup_f16function* Neon intrinsic unsafe
1808core::core_arch::arm_shared::neon::generatedvld4_dup_f32function* Neon intrinsic unsafe
1809core::core_arch::arm_shared::neon::generatedvld4_dup_p16function* Neon intrinsic unsafe
1810core::core_arch::arm_shared::neon::generatedvld4_dup_p64function* Neon intrinsic unsafe
1811core::core_arch::arm_shared::neon::generatedvld4_dup_p8function* Neon intrinsic unsafe
1812core::core_arch::arm_shared::neon::generatedvld4_dup_s16function* Neon intrinsic unsafe
1813core::core_arch::arm_shared::neon::generatedvld4_dup_s32function* Neon intrinsic unsafe
1814core::core_arch::arm_shared::neon::generatedvld4_dup_s64function* Neon intrinsic unsafe
1815core::core_arch::arm_shared::neon::generatedvld4_dup_s8function* Neon intrinsic unsafe
1816core::core_arch::arm_shared::neon::generatedvld4_dup_u16function* Neon intrinsic unsafe
1817core::core_arch::arm_shared::neon::generatedvld4_dup_u32function* Neon intrinsic unsafe
1818core::core_arch::arm_shared::neon::generatedvld4_dup_u64function* Neon intrinsic unsafe
1819core::core_arch::arm_shared::neon::generatedvld4_dup_u8function* Neon intrinsic unsafe
1820core::core_arch::arm_shared::neon::generatedvld4_f16function* Neon intrinsic unsafe
1821core::core_arch::arm_shared::neon::generatedvld4_f32function* Neon intrinsic unsafe
1822core::core_arch::arm_shared::neon::generatedvld4_lane_f16function* Neon intrinsic unsafe
1823core::core_arch::arm_shared::neon::generatedvld4_lane_f32function* Neon intrinsic unsafe
1824core::core_arch::arm_shared::neon::generatedvld4_lane_p16function* Neon intrinsic unsafe
1825core::core_arch::arm_shared::neon::generatedvld4_lane_p8function* Neon intrinsic unsafe
1826core::core_arch::arm_shared::neon::generatedvld4_lane_s16function* Neon intrinsic unsafe
1827core::core_arch::arm_shared::neon::generatedvld4_lane_s32function* Neon intrinsic unsafe
1828core::core_arch::arm_shared::neon::generatedvld4_lane_s8function* Neon intrinsic unsafe
1829core::core_arch::arm_shared::neon::generatedvld4_lane_u16function* Neon intrinsic unsafe
1830core::core_arch::arm_shared::neon::generatedvld4_lane_u32function* Neon intrinsic unsafe
1831core::core_arch::arm_shared::neon::generatedvld4_lane_u8function* Neon intrinsic unsafe
1832core::core_arch::arm_shared::neon::generatedvld4_p16function* Neon intrinsic unsafe
1833core::core_arch::arm_shared::neon::generatedvld4_p64function* Neon intrinsic unsafe
1834core::core_arch::arm_shared::neon::generatedvld4_p8function* Neon intrinsic unsafe
1835core::core_arch::arm_shared::neon::generatedvld4_s16function* Neon intrinsic unsafe
1836core::core_arch::arm_shared::neon::generatedvld4_s32function* Neon intrinsic unsafe
1837core::core_arch::arm_shared::neon::generatedvld4_s64function* Neon intrinsic unsafe
1838core::core_arch::arm_shared::neon::generatedvld4_s8function* Neon intrinsic unsafe
1839core::core_arch::arm_shared::neon::generatedvld4_u16function* Neon intrinsic unsafe
1840core::core_arch::arm_shared::neon::generatedvld4_u32function* Neon intrinsic unsafe
1841core::core_arch::arm_shared::neon::generatedvld4_u64function* Neon intrinsic unsafe
1842core::core_arch::arm_shared::neon::generatedvld4_u8function* Neon intrinsic unsafe
1843core::core_arch::arm_shared::neon::generatedvld4q_dup_f16function* Neon intrinsic unsafe
1844core::core_arch::arm_shared::neon::generatedvld4q_dup_f32function* Neon intrinsic unsafe
1845core::core_arch::arm_shared::neon::generatedvld4q_dup_p16function* Neon intrinsic unsafe
1846core::core_arch::arm_shared::neon::generatedvld4q_dup_p8function* Neon intrinsic unsafe
1847core::core_arch::arm_shared::neon::generatedvld4q_dup_s16function* Neon intrinsic unsafe
1848core::core_arch::arm_shared::neon::generatedvld4q_dup_s32function* Neon intrinsic unsafe
1849core::core_arch::arm_shared::neon::generatedvld4q_dup_s8function* Neon intrinsic unsafe
1850core::core_arch::arm_shared::neon::generatedvld4q_dup_u16function* Neon intrinsic unsafe
1851core::core_arch::arm_shared::neon::generatedvld4q_dup_u32function* Neon intrinsic unsafe
1852core::core_arch::arm_shared::neon::generatedvld4q_dup_u8function* Neon intrinsic unsafe
1853core::core_arch::arm_shared::neon::generatedvld4q_f16function* Neon intrinsic unsafe
1854core::core_arch::arm_shared::neon::generatedvld4q_f32function* Neon intrinsic unsafe
1855core::core_arch::arm_shared::neon::generatedvld4q_lane_f16function* Neon intrinsic unsafe
1856core::core_arch::arm_shared::neon::generatedvld4q_lane_f32function* Neon intrinsic unsafe
1857core::core_arch::arm_shared::neon::generatedvld4q_lane_p16function* Neon intrinsic unsafe
1858core::core_arch::arm_shared::neon::generatedvld4q_lane_s16function* Neon intrinsic unsafe
1859core::core_arch::arm_shared::neon::generatedvld4q_lane_s32function* Neon intrinsic unsafe
1860core::core_arch::arm_shared::neon::generatedvld4q_lane_u16function* Neon intrinsic unsafe
1861core::core_arch::arm_shared::neon::generatedvld4q_lane_u32function* Neon intrinsic unsafe
1862core::core_arch::arm_shared::neon::generatedvld4q_p16function* Neon intrinsic unsafe
1863core::core_arch::arm_shared::neon::generatedvld4q_p8function* Neon intrinsic unsafe
1864core::core_arch::arm_shared::neon::generatedvld4q_s16function* Neon intrinsic unsafe
1865core::core_arch::arm_shared::neon::generatedvld4q_s32function* Neon intrinsic unsafe
1866core::core_arch::arm_shared::neon::generatedvld4q_s8function* Neon intrinsic unsafe
1867core::core_arch::arm_shared::neon::generatedvld4q_u16function* Neon intrinsic unsafe
1868core::core_arch::arm_shared::neon::generatedvld4q_u32function* Neon intrinsic unsafe
1869core::core_arch::arm_shared::neon::generatedvld4q_u8function* Neon intrinsic unsafe
1870core::core_arch::arm_shared::neon::generatedvldrq_p128function* Neon intrinsic unsafe
1871 | core::core_arch::arm_shared::neon::generated | vst1_f16_x2 | function | Neon intrinsic unsafe
1872 | core::core_arch::arm_shared::neon::generated | vst1_f16_x3 | function | Neon intrinsic unsafe
1873 | core::core_arch::arm_shared::neon::generated | vst1_f16_x4 | function | Neon intrinsic unsafe
1874 | core::core_arch::arm_shared::neon::generated | vst1_f32_x2 | function | Neon intrinsic unsafe
1875 | core::core_arch::arm_shared::neon::generated | vst1_f32_x3 | function | Neon intrinsic unsafe
1876 | core::core_arch::arm_shared::neon::generated | vst1_f32_x4 | function | Neon intrinsic unsafe
1877 | core::core_arch::arm_shared::neon::generated | vst1_lane_f16 | function | Neon intrinsic unsafe
1878 | core::core_arch::arm_shared::neon::generated | vst1_lane_f32 | function | Neon intrinsic unsafe
1879 | core::core_arch::arm_shared::neon::generated | vst1_lane_p16 | function | Neon intrinsic unsafe
1880 | core::core_arch::arm_shared::neon::generated | vst1_lane_p64 | function | Neon intrinsic unsafe
1881 | core::core_arch::arm_shared::neon::generated | vst1_lane_p8 | function | Neon intrinsic unsafe
1882 | core::core_arch::arm_shared::neon::generated | vst1_lane_s16 | function | Neon intrinsic unsafe
1883 | core::core_arch::arm_shared::neon::generated | vst1_lane_s32 | function | Neon intrinsic unsafe
1884 | core::core_arch::arm_shared::neon::generated | vst1_lane_s64 | function | Neon intrinsic unsafe
1885 | core::core_arch::arm_shared::neon::generated | vst1_lane_s8 | function | Neon intrinsic unsafe
1886 | core::core_arch::arm_shared::neon::generated | vst1_lane_u16 | function | Neon intrinsic unsafe
1887 | core::core_arch::arm_shared::neon::generated | vst1_lane_u32 | function | Neon intrinsic unsafe
1888 | core::core_arch::arm_shared::neon::generated | vst1_lane_u64 | function | Neon intrinsic unsafe
1889 | core::core_arch::arm_shared::neon::generated | vst1_lane_u8 | function | Neon intrinsic unsafe
1890 | core::core_arch::arm_shared::neon::generated | vst1_p16_x2 | function | Neon intrinsic unsafe
1891 | core::core_arch::arm_shared::neon::generated | vst1_p16_x3 | function | Neon intrinsic unsafe
1892 | core::core_arch::arm_shared::neon::generated | vst1_p16_x4 | function | Neon intrinsic unsafe
1893 | core::core_arch::arm_shared::neon::generated | vst1_p64_x2 | function | Neon intrinsic unsafe
1894 | core::core_arch::arm_shared::neon::generated | vst1_p64_x3 | function | Neon intrinsic unsafe
1895 | core::core_arch::arm_shared::neon::generated | vst1_p64_x4 | function | Neon intrinsic unsafe
1896 | core::core_arch::arm_shared::neon::generated | vst1_p8_x2 | function | Neon intrinsic unsafe
1897 | core::core_arch::arm_shared::neon::generated | vst1_p8_x3 | function | Neon intrinsic unsafe
1898 | core::core_arch::arm_shared::neon::generated | vst1_p8_x4 | function | Neon intrinsic unsafe
1899 | core::core_arch::arm_shared::neon::generated | vst1_s16_x2 | function | Neon intrinsic unsafe
1900 | core::core_arch::arm_shared::neon::generated | vst1_s16_x3 | function | Neon intrinsic unsafe
1901 | core::core_arch::arm_shared::neon::generated | vst1_s16_x4 | function | Neon intrinsic unsafe
1902 | core::core_arch::arm_shared::neon::generated | vst1_s32_x2 | function | Neon intrinsic unsafe
1903 | core::core_arch::arm_shared::neon::generated | vst1_s32_x3 | function | Neon intrinsic unsafe
1904 | core::core_arch::arm_shared::neon::generated | vst1_s32_x4 | function | Neon intrinsic unsafe
1905 | core::core_arch::arm_shared::neon::generated | vst1_s64_x2 | function | Neon intrinsic unsafe
1906 | core::core_arch::arm_shared::neon::generated | vst1_s64_x3 | function | Neon intrinsic unsafe
1907 | core::core_arch::arm_shared::neon::generated | vst1_s64_x4 | function | Neon intrinsic unsafe
1908 | core::core_arch::arm_shared::neon::generated | vst1_s8_x2 | function | Neon intrinsic unsafe
1909 | core::core_arch::arm_shared::neon::generated | vst1_s8_x3 | function | Neon intrinsic unsafe
1910 | core::core_arch::arm_shared::neon::generated | vst1_s8_x4 | function | Neon intrinsic unsafe
1911 | core::core_arch::arm_shared::neon::generated | vst1_u16_x2 | function | Neon intrinsic unsafe
1912 | core::core_arch::arm_shared::neon::generated | vst1_u16_x3 | function | Neon intrinsic unsafe
1913 | core::core_arch::arm_shared::neon::generated | vst1_u16_x4 | function | Neon intrinsic unsafe
1914 | core::core_arch::arm_shared::neon::generated | vst1_u32_x2 | function | Neon intrinsic unsafe
1915 | core::core_arch::arm_shared::neon::generated | vst1_u32_x3 | function | Neon intrinsic unsafe
1916 | core::core_arch::arm_shared::neon::generated | vst1_u32_x4 | function | Neon intrinsic unsafe
1917 | core::core_arch::arm_shared::neon::generated | vst1_u64_x2 | function | Neon intrinsic unsafe
1918 | core::core_arch::arm_shared::neon::generated | vst1_u64_x3 | function | Neon intrinsic unsafe
1919 | core::core_arch::arm_shared::neon::generated | vst1_u64_x4 | function | Neon intrinsic unsafe
1920 | core::core_arch::arm_shared::neon::generated | vst1_u8_x2 | function | Neon intrinsic unsafe
1921 | core::core_arch::arm_shared::neon::generated | vst1_u8_x3 | function | Neon intrinsic unsafe
1922 | core::core_arch::arm_shared::neon::generated | vst1_u8_x4 | function | Neon intrinsic unsafe
1923 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x2 | function | Neon intrinsic unsafe
1924 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x3 | function | Neon intrinsic unsafe
1925 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x4 | function | Neon intrinsic unsafe
1926 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x2 | function | Neon intrinsic unsafe
1927 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x3 | function | Neon intrinsic unsafe
1928 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x4 | function | Neon intrinsic unsafe
1929 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f16 | function | Neon intrinsic unsafe
1930 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f32 | function | Neon intrinsic unsafe
1931 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p16 | function | Neon intrinsic unsafe
1932 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p64 | function | Neon intrinsic unsafe
1933 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p8 | function | Neon intrinsic unsafe
1934 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s16 | function | Neon intrinsic unsafe
1935 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s32 | function | Neon intrinsic unsafe
1936 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s64 | function | Neon intrinsic unsafe
1937 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s8 | function | Neon intrinsic unsafe
1938 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u16 | function | Neon intrinsic unsafe
1939 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u32 | function | Neon intrinsic unsafe
1940 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u64 | function | Neon intrinsic unsafe
1941 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u8 | function | Neon intrinsic unsafe
1942 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x2 | function | Neon intrinsic unsafe
1943 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x3 | function | Neon intrinsic unsafe
1944 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x4 | function | Neon intrinsic unsafe
1945 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x2 | function | Neon intrinsic unsafe
1946 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x3 | function | Neon intrinsic unsafe
1947 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x4 | function | Neon intrinsic unsafe
1948 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x2 | function | Neon intrinsic unsafe
1949 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x3 | function | Neon intrinsic unsafe
1950 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x4 | function | Neon intrinsic unsafe
1951 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x2 | function | Neon intrinsic unsafe
1952 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x3 | function | Neon intrinsic unsafe
1953 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x4 | function | Neon intrinsic unsafe
1954 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x2 | function | Neon intrinsic unsafe
1955 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x3 | function | Neon intrinsic unsafe
1956 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x4 | function | Neon intrinsic unsafe
1957 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x2 | function | Neon intrinsic unsafe
1958 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x3 | function | Neon intrinsic unsafe
1959 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x4 | function | Neon intrinsic unsafe
1960 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x2 | function | Neon intrinsic unsafe
1961 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x3 | function | Neon intrinsic unsafe
1962 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x4 | function | Neon intrinsic unsafe
1963 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x2 | function | Neon intrinsic unsafe
1964 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x3 | function | Neon intrinsic unsafe
1965 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x4 | function | Neon intrinsic unsafe
1966 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x2 | function | Neon intrinsic unsafe
1967 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x3 | function | Neon intrinsic unsafe
1968 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x4 | function | Neon intrinsic unsafe
1969 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x2 | function | Neon intrinsic unsafe
1970 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x3 | function | Neon intrinsic unsafe
1971 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x4 | function | Neon intrinsic unsafe
1972 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x2 | function | Neon intrinsic unsafe
1973 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x3 | function | Neon intrinsic unsafe
1974 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x4 | function | Neon intrinsic unsafe
1975 | core::core_arch::arm_shared::neon::generated | vst2_f16 | function | Neon intrinsic unsafe
1976 | core::core_arch::arm_shared::neon::generated | vst2_f32 | function | Neon intrinsic unsafe
1977 | core::core_arch::arm_shared::neon::generated | vst2_lane_f16 | function | Neon intrinsic unsafe
1978 | core::core_arch::arm_shared::neon::generated | vst2_lane_f32 | function | Neon intrinsic unsafe
1979 | core::core_arch::arm_shared::neon::generated | vst2_lane_p16 | function | Neon intrinsic unsafe
1980 | core::core_arch::arm_shared::neon::generated | vst2_lane_p8 | function | Neon intrinsic unsafe
1981 | core::core_arch::arm_shared::neon::generated | vst2_lane_s16 | function | Neon intrinsic unsafe
1982 | core::core_arch::arm_shared::neon::generated | vst2_lane_s32 | function | Neon intrinsic unsafe
1983 | core::core_arch::arm_shared::neon::generated | vst2_lane_s8 | function | Neon intrinsic unsafe
1984 | core::core_arch::arm_shared::neon::generated | vst2_lane_u16 | function | Neon intrinsic unsafe
1985 | core::core_arch::arm_shared::neon::generated | vst2_lane_u32 | function | Neon intrinsic unsafe
1986 | core::core_arch::arm_shared::neon::generated | vst2_lane_u8 | function | Neon intrinsic unsafe
1987 | core::core_arch::arm_shared::neon::generated | vst2_p16 | function | Neon intrinsic unsafe
1988 | core::core_arch::arm_shared::neon::generated | vst2_p64 | function | Neon intrinsic unsafe
1989 | core::core_arch::arm_shared::neon::generated | vst2_p8 | function | Neon intrinsic unsafe
1990 | core::core_arch::arm_shared::neon::generated | vst2_s16 | function | Neon intrinsic unsafe
1991 | core::core_arch::arm_shared::neon::generated | vst2_s32 | function | Neon intrinsic unsafe
1992 | core::core_arch::arm_shared::neon::generated | vst2_s64 | function | Neon intrinsic unsafe
1993 | core::core_arch::arm_shared::neon::generated | vst2_s8 | function | Neon intrinsic unsafe
1994 | core::core_arch::arm_shared::neon::generated | vst2_u16 | function | Neon intrinsic unsafe
1995 | core::core_arch::arm_shared::neon::generated | vst2_u32 | function | Neon intrinsic unsafe
1996 | core::core_arch::arm_shared::neon::generated | vst2_u64 | function | Neon intrinsic unsafe
1997 | core::core_arch::arm_shared::neon::generated | vst2_u8 | function | Neon intrinsic unsafe
1998 | core::core_arch::arm_shared::neon::generated | vst2q_f16 | function | Neon intrinsic unsafe
1999 | core::core_arch::arm_shared::neon::generated | vst2q_f32 | function | Neon intrinsic unsafe
2000 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f16 | function | Neon intrinsic unsafe
2001 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f32 | function | Neon intrinsic unsafe
2002 | core::core_arch::arm_shared::neon::generated | vst2q_lane_p16 | function | Neon intrinsic unsafe
2003 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s16 | function | Neon intrinsic unsafe
2004 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s32 | function | Neon intrinsic unsafe
2005 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u16 | function | Neon intrinsic unsafe
2006 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u32 | function | Neon intrinsic unsafe
2007 | core::core_arch::arm_shared::neon::generated | vst2q_p16 | function | Neon intrinsic unsafe
2008 | core::core_arch::arm_shared::neon::generated | vst2q_p8 | function | Neon intrinsic unsafe
2009 | core::core_arch::arm_shared::neon::generated | vst2q_s16 | function | Neon intrinsic unsafe
2010 | core::core_arch::arm_shared::neon::generated | vst2q_s32 | function | Neon intrinsic unsafe
2011 | core::core_arch::arm_shared::neon::generated | vst2q_s8 | function | Neon intrinsic unsafe
2012 | core::core_arch::arm_shared::neon::generated | vst2q_u16 | function | Neon intrinsic unsafe
2013 | core::core_arch::arm_shared::neon::generated | vst2q_u32 | function | Neon intrinsic unsafe
2014 | core::core_arch::arm_shared::neon::generated | vst2q_u8 | function | Neon intrinsic unsafe
2015 | core::core_arch::arm_shared::neon::generated | vst3_f16 | function | Neon intrinsic unsafe
2016 | core::core_arch::arm_shared::neon::generated | vst3_f32 | function | Neon intrinsic unsafe
2017 | core::core_arch::arm_shared::neon::generated | vst3_lane_f16 | function | Neon intrinsic unsafe
2018 | core::core_arch::arm_shared::neon::generated | vst3_lane_f32 | function | Neon intrinsic unsafe
2019 | core::core_arch::arm_shared::neon::generated | vst3_lane_p16 | function | Neon intrinsic unsafe
2020 | core::core_arch::arm_shared::neon::generated | vst3_lane_p8 | function | Neon intrinsic unsafe
2021 | core::core_arch::arm_shared::neon::generated | vst3_lane_s16 | function | Neon intrinsic unsafe
2022 | core::core_arch::arm_shared::neon::generated | vst3_lane_s32 | function | Neon intrinsic unsafe
2023 | core::core_arch::arm_shared::neon::generated | vst3_lane_s8 | function | Neon intrinsic unsafe
2024 | core::core_arch::arm_shared::neon::generated | vst3_lane_u16 | function | Neon intrinsic unsafe
2025 | core::core_arch::arm_shared::neon::generated | vst3_lane_u32 | function | Neon intrinsic unsafe
2026 | core::core_arch::arm_shared::neon::generated | vst3_lane_u8 | function | Neon intrinsic unsafe
2027 | core::core_arch::arm_shared::neon::generated | vst3_p16 | function | Neon intrinsic unsafe
2028 | core::core_arch::arm_shared::neon::generated | vst3_p64 | function | Neon intrinsic unsafe
2029 | core::core_arch::arm_shared::neon::generated | vst3_p8 | function | Neon intrinsic unsafe
2030 | core::core_arch::arm_shared::neon::generated | vst3_s16 | function | Neon intrinsic unsafe
2031 | core::core_arch::arm_shared::neon::generated | vst3_s32 | function | Neon intrinsic unsafe
2032 | core::core_arch::arm_shared::neon::generated | vst3_s64 | function | Neon intrinsic unsafe
2033 | core::core_arch::arm_shared::neon::generated | vst3_s8 | function | Neon intrinsic unsafe
2034 | core::core_arch::arm_shared::neon::generated | vst3_u16 | function | Neon intrinsic unsafe
2035 | core::core_arch::arm_shared::neon::generated | vst3_u32 | function | Neon intrinsic unsafe
2036 | core::core_arch::arm_shared::neon::generated | vst3_u64 | function | Neon intrinsic unsafe
2037 | core::core_arch::arm_shared::neon::generated | vst3_u8 | function | Neon intrinsic unsafe
2038 | core::core_arch::arm_shared::neon::generated | vst3q_f16 | function | Neon intrinsic unsafe
2039 | core::core_arch::arm_shared::neon::generated | vst3q_f32 | function | Neon intrinsic unsafe
2040 | core::core_arch::arm_shared::neon::generated | vst3q_lane_f16 | function | Neon intrinsic unsafe
2041 | core::core_arch::arm_shared::neon::generated | vst3q_lane_f32 | function | Neon intrinsic unsafe
2042 | core::core_arch::arm_shared::neon::generated | vst3q_lane_p16 | function | Neon intrinsic unsafe
2043 | core::core_arch::arm_shared::neon::generated | vst3q_lane_s16 | function | Neon intrinsic unsafe
2044 | core::core_arch::arm_shared::neon::generated | vst3q_lane_s32 | function | Neon intrinsic unsafe
2045 | core::core_arch::arm_shared::neon::generated | vst3q_lane_u16 | function | Neon intrinsic unsafe
2046 | core::core_arch::arm_shared::neon::generated | vst3q_lane_u32 | function | Neon intrinsic unsafe
2047 | core::core_arch::arm_shared::neon::generated | vst3q_p16 | function | Neon intrinsic unsafe
2048 | core::core_arch::arm_shared::neon::generated | vst3q_p8 | function | Neon intrinsic unsafe
2049 | core::core_arch::arm_shared::neon::generated | vst3q_s16 | function | Neon intrinsic unsafe
2050 | core::core_arch::arm_shared::neon::generated | vst3q_s32 | function | Neon intrinsic unsafe
2051 | core::core_arch::arm_shared::neon::generated | vst3q_s8 | function | Neon intrinsic unsafe
2052 | core::core_arch::arm_shared::neon::generated | vst3q_u16 | function | Neon intrinsic unsafe
2053 | core::core_arch::arm_shared::neon::generated | vst3q_u32 | function | Neon intrinsic unsafe
2054 | core::core_arch::arm_shared::neon::generated | vst3q_u8 | function | Neon intrinsic unsafe
2055 | core::core_arch::arm_shared::neon::generated | vst4_f16 | function | Neon intrinsic unsafe
2056 | core::core_arch::arm_shared::neon::generated | vst4_f32 | function | Neon intrinsic unsafe
2057 | core::core_arch::arm_shared::neon::generated | vst4_lane_f16 | function | Neon intrinsic unsafe
2058 | core::core_arch::arm_shared::neon::generated | vst4_lane_f32 | function | Neon intrinsic unsafe
2059 | core::core_arch::arm_shared::neon::generated | vst4_lane_p16 | function | Neon intrinsic unsafe
2060 | core::core_arch::arm_shared::neon::generated | vst4_lane_p8 | function | Neon intrinsic unsafe
2061 | core::core_arch::arm_shared::neon::generated | vst4_lane_s16 | function | Neon intrinsic unsafe
2062 | core::core_arch::arm_shared::neon::generated | vst4_lane_s32 | function | Neon intrinsic unsafe
2063 | core::core_arch::arm_shared::neon::generated | vst4_lane_s8 | function | Neon intrinsic unsafe
2064 | core::core_arch::arm_shared::neon::generated | vst4_lane_u16 | function | Neon intrinsic unsafe
2065 | core::core_arch::arm_shared::neon::generated | vst4_lane_u32 | function | Neon intrinsic unsafe
2066 | core::core_arch::arm_shared::neon::generated | vst4_lane_u8 | function | Neon intrinsic unsafe
2067 | core::core_arch::arm_shared::neon::generated | vst4_p16 | function | Neon intrinsic unsafe
2068 | core::core_arch::arm_shared::neon::generated | vst4_p64 | function | Neon intrinsic unsafe
2069 | core::core_arch::arm_shared::neon::generated | vst4_p8 | function | Neon intrinsic unsafe
2070 | core::core_arch::arm_shared::neon::generated | vst4_s16 | function | Neon intrinsic unsafe
2071 | core::core_arch::arm_shared::neon::generated | vst4_s32 | function | Neon intrinsic unsafe
2072 | core::core_arch::arm_shared::neon::generated | vst4_s64 | function | Neon intrinsic unsafe
2073 | core::core_arch::arm_shared::neon::generated | vst4_s8 | function | Neon intrinsic unsafe
2074 | core::core_arch::arm_shared::neon::generated | vst4_u16 | function | Neon intrinsic unsafe
2075 | core::core_arch::arm_shared::neon::generated | vst4_u32 | function | Neon intrinsic unsafe
2076 | core::core_arch::arm_shared::neon::generated | vst4_u64 | function | Neon intrinsic unsafe
2077 | core::core_arch::arm_shared::neon::generated | vst4_u8 | function | Neon intrinsic unsafe
2078 | core::core_arch::arm_shared::neon::generated | vst4q_f16 | function | Neon intrinsic unsafe
2079 | core::core_arch::arm_shared::neon::generated | vst4q_f32 | function | Neon intrinsic unsafe
2080 | core::core_arch::arm_shared::neon::generated | vst4q_lane_f16 | function | Neon intrinsic unsafe
2081 | core::core_arch::arm_shared::neon::generated | vst4q_lane_f32 | function | Neon intrinsic unsafe
2082 | core::core_arch::arm_shared::neon::generated | vst4q_lane_p16 | function | Neon intrinsic unsafe
2083 | core::core_arch::arm_shared::neon::generated | vst4q_lane_s16 | function | Neon intrinsic unsafe
2084 | core::core_arch::arm_shared::neon::generated | vst4q_lane_s32 | function | Neon intrinsic unsafe
2085 | core::core_arch::arm_shared::neon::generated | vst4q_lane_u16 | function | Neon intrinsic unsafe
2086 | core::core_arch::arm_shared::neon::generated | vst4q_lane_u32 | function | Neon intrinsic unsafe
2087 | core::core_arch::arm_shared::neon::generated | vst4q_p16 | function | Neon intrinsic unsafe
2088 | core::core_arch::arm_shared::neon::generated | vst4q_p8 | function | Neon intrinsic unsafe
2089 | core::core_arch::arm_shared::neon::generated | vst4q_s16 | function | Neon intrinsic unsafe
2090 | core::core_arch::arm_shared::neon::generated | vst4q_s32 | function | Neon intrinsic unsafe
2091 | core::core_arch::arm_shared::neon::generated | vst4q_s8 | function | Neon intrinsic unsafe
2092 | core::core_arch::arm_shared::neon::generated | vst4q_u16 | function | Neon intrinsic unsafe
2093 | core::core_arch::arm_shared::neon::generated | vst4q_u32 | function | Neon intrinsic unsafe
2094 | core::core_arch::arm_shared::neon::generated | vst4q_u8 | function | Neon intrinsic unsafe
2095 | core::core_arch::arm_shared::neon::generated | vstrq_p128 | function | Neon intrinsic unsafe
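The `vld2`/`vld3`/`vld4` and `vst2`/`vst3`/`vst4` rows above are the Neon structured load/store families: they load or store 2-, 3-, or 4-element records while deinterleaving them into separate vector registers. As a minimal sketch (our example, not part of the generated index), the following uses `vld3_u8`/`vst3_u8` to adjust one channel of 8 packed RGB pixels; the names and the 24-byte buffer size are illustrative. The intrinsics are `unsafe` because the caller must guarantee the pointer is valid for 24 bytes of reads/writes; the code is `cfg`-gated to AArch64 (where these same `arm_shared` intrinsics are exposed under `core::arch::aarch64`) with a scalar fallback elsewhere.

```rust
// Sketch: brighten the red channel of 8 interleaved RGB pixels (24 bytes).
#[cfg(all(target_arch = "aarch64", target_feature = "neon"))]
fn brighten_red(pixels: &mut [u8; 24]) {
    use core::arch::aarch64::{vadd_u8, vdup_n_u8, vld3_u8, vst3_u8};
    // SAFETY: `pixels` is valid for 24 bytes of reads and writes, and the
    // `neon` target feature is statically enabled by the cfg gate above.
    unsafe {
        // vld3_u8 deinterleaves R, G, B into the three fields of uint8x8x3_t.
        let mut rgb = vld3_u8(pixels.as_ptr());
        rgb.0 = vadd_u8(rgb.0, vdup_n_u8(16)); // wrapping add on the R lanes
        // vst3_u8 re-interleaves the three registers back into RGBRGB... order.
        vst3_u8(pixels.as_mut_ptr(), rgb);
    }
}

#[cfg(not(all(target_arch = "aarch64", target_feature = "neon")))]
fn brighten_red(pixels: &mut [u8; 24]) {
    // Portable fallback: every third byte is a red sample.
    for chunk in pixels.chunks_mut(3) {
        chunk[0] = chunk[0].wrapping_add(16);
    }
}

fn main() {
    let mut px = [10u8; 24];
    brighten_red(&mut px);
    // First pixel: red bumped by 16, green and blue untouched.
    println!("{} {} {}", px[0], px[1], px[2]);
}
```

The `_lane` variants in the listing touch a single element of one record instead of eight, and the `_dup` variants load one record and broadcast it to all lanes; the same pointer-validity obligations apply.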
2096 | core::core_arch::hexagon::v128 | Q6_Q_and_QQ | function
2097 | core::core_arch::hexagon::v128 | Q6_Q_and_QQn | function
2098 | core::core_arch::hexagon::v128 | Q6_Q_not_Q | function
2099 | core::core_arch::hexagon::v128 | Q6_Q_or_QQ | function
2100 | core::core_arch::hexagon::v128 | Q6_Q_or_QQn | function
2101 | core::core_arch::hexagon::v128 | Q6_Q_vand_VR | function
2102 | core::core_arch::hexagon::v128 | Q6_Q_vandor_QVR | function
2103 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eq_VbVb | function
2104 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eq_VhVh | function
2105 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eq_VwVw | function
2106 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqand_QVbVb | function
2107 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqand_QVhVh | function
2108 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqand_QVwVw | function
2109 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqor_QVbVb | function
2110 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqor_QVhVh | function
2111 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqor_QVwVw | function
2112 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqxacc_QVbVb | function
2113 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqxacc_QVhVh | function
2114 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_eqxacc_QVwVw | function
2115 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VbVb | function
2116 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VhVh | function
2117 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VhfVhf | function
2118 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VsfVsf | function
2119 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VubVub | function
2120 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VuhVuh | function
2121 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VuwVuw | function
2122 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gt_VwVw | function
2123 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVbVb | function
2124 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVhVh | function
2125 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVhfVhf | function
2126 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVsfVsf | function
2127 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVubVub | function
2128 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVuhVuh | function
2129 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVuwVuw | function
2130 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtand_QVwVw | function
2131 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVbVb | function
2132 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVhVh | function
2133 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVhfVhf | function
2134 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVsfVsf | function
2135 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVubVub | function
2136 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVuhVuh | function
2137 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVuwVuw | function
2138 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtor_QVwVw | function
2139 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVbVb | function
2140 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVhVh | function
2141 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVhfVhf | function
2142 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVsfVsf | function
2143 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVubVub | function
2144 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVuhVuh | function
2145 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVuwVuw | function
2146 | core::core_arch::hexagon::v128 | Q6_Q_vcmp_gtxacc_QVwVw | function
2147 | core::core_arch::hexagon::v128 | Q6_Q_vsetq2_R | function
2148 | core::core_arch::hexagon::v128 | Q6_Q_vsetq_R | function
2149 | core::core_arch::hexagon::v128 | Q6_Q_xor_QQ | function
2150 | core::core_arch::hexagon::v128 | Q6_Qb_vshuffe_QhQh | function
2151 | core::core_arch::hexagon::v128 | Q6_Qh_vshuffe_QwQw | function
2152 | core::core_arch::hexagon::v128 | Q6_R_vextract_VR | function
2153 | core::core_arch::hexagon::v128 | Q6_V_equals_V | function
2154 | core::core_arch::hexagon::v128 | Q6_V_hi_W | function
2155 | core::core_arch::hexagon::v128 | Q6_V_lo_W | function
2156 | core::core_arch::hexagon::v128 | Q6_V_vabs_V | function
2157 | core::core_arch::hexagon::v128 | Q6_V_valign_VVI | function
2158 | core::core_arch::hexagon::v128 | Q6_V_valign_VVR | function
2159 | core::core_arch::hexagon::v128 | Q6_V_vand_QR | function
2160 | core::core_arch::hexagon::v128 | Q6_V_vand_QV | function
2161 | core::core_arch::hexagon::v128 | Q6_V_vand_QnR | function
2162 | core::core_arch::hexagon::v128 | Q6_V_vand_QnV | function
2163 | core::core_arch::hexagon::v128 | Q6_V_vand_VV | function
2164 | core::core_arch::hexagon::v128 | Q6_V_vandor_VQR | function
2165 | core::core_arch::hexagon::v128 | Q6_V_vandor_VQnR | function
2166core::core_arch::hexagon::v128Q6_V_vdelta_VVfunction
2167core::core_arch::hexagon::v128Q6_V_vfmax_VVfunction
2168core::core_arch::hexagon::v128Q6_V_vfmin_VVfunction
2169core::core_arch::hexagon::v128Q6_V_vfneg_Vfunction
2170core::core_arch::hexagon::v128Q6_V_vgetqfext_VRfunction
2171core::core_arch::hexagon::v128Q6_V_vlalign_VVIfunction
2172core::core_arch::hexagon::v128Q6_V_vlalign_VVRfunction
2173core::core_arch::hexagon::v128Q6_V_vmux_QVVfunction
2174core::core_arch::hexagon::v128Q6_V_vnot_Vfunction
2175core::core_arch::hexagon::v128Q6_V_vor_VVfunction
2176core::core_arch::hexagon::v128Q6_V_vrdelta_VVfunction
2177core::core_arch::hexagon::v128Q6_V_vror_VRfunction
2178core::core_arch::hexagon::v128Q6_V_vsetqfext_VRfunction
2179core::core_arch::hexagon::v128Q6_V_vsplat_Rfunction
2180core::core_arch::hexagon::v128Q6_V_vxor_VVfunction
2181core::core_arch::hexagon::v128Q6_V_vzerofunction
2182core::core_arch::hexagon::v128Q6_Vb_condacc_QVbVbfunction
2183core::core_arch::hexagon::v128Q6_Vb_condacc_QnVbVbfunction
2184core::core_arch::hexagon::v128Q6_Vb_condnac_QVbVbfunction
2185core::core_arch::hexagon::v128Q6_Vb_condnac_QnVbVbfunction
2186core::core_arch::hexagon::v128Q6_Vb_prefixsum_Qfunction
2187core::core_arch::hexagon::v128Q6_Vb_vabs_Vbfunction
2188core::core_arch::hexagon::v128Q6_Vb_vabs_Vb_satfunction
2189core::core_arch::hexagon::v128Q6_Vb_vadd_VbVbfunction
2190core::core_arch::hexagon::v128Q6_Vb_vadd_VbVb_satfunction
2191core::core_arch::hexagon::v128Q6_Vb_vasr_VhVhR_rnd_satfunction
2192core::core_arch::hexagon::v128Q6_Vb_vasr_VhVhR_satfunction
2193core::core_arch::hexagon::v128Q6_Vb_vavg_VbVbfunction
2194core::core_arch::hexagon::v128Q6_Vb_vavg_VbVb_rndfunction
2195core::core_arch::hexagon::v128Q6_Vb_vcvt_VhfVhffunction
2196core::core_arch::hexagon::v128Q6_Vb_vdeal_Vbfunction
2197core::core_arch::hexagon::v128Q6_Vb_vdeale_VbVbfunction
2198core::core_arch::hexagon::v128Q6_Vb_vlut32_VbVbIfunction
2199core::core_arch::hexagon::v128Q6_Vb_vlut32_VbVbRfunction
2200core::core_arch::hexagon::v128Q6_Vb_vlut32_VbVbR_nomatchfunction
2201core::core_arch::hexagon::v128Q6_Vb_vlut32or_VbVbVbIfunction
2202core::core_arch::hexagon::v128Q6_Vb_vlut32or_VbVbVbRfunction
2203core::core_arch::hexagon::v128Q6_Vb_vmax_VbVbfunction
2204core::core_arch::hexagon::v128Q6_Vb_vmin_VbVbfunction
2205core::core_arch::hexagon::v128Q6_Vb_vnavg_VbVbfunction
2206core::core_arch::hexagon::v128Q6_Vb_vnavg_VubVubfunction
2207core::core_arch::hexagon::v128Q6_Vb_vpack_VhVh_satfunction
2208core::core_arch::hexagon::v128Q6_Vb_vpacke_VhVhfunction
2209core::core_arch::hexagon::v128Q6_Vb_vpacko_VhVhfunction
2210core::core_arch::hexagon::v128Q6_Vb_vround_VhVh_satfunction
2211core::core_arch::hexagon::v128Q6_Vb_vshuff_Vbfunction
2212core::core_arch::hexagon::v128Q6_Vb_vshuffe_VbVbfunction
2213core::core_arch::hexagon::v128Q6_Vb_vshuffo_VbVbfunction
2214core::core_arch::hexagon::v128Q6_Vb_vsplat_Rfunction
2215core::core_arch::hexagon::v128Q6_Vb_vsub_VbVbfunction
2216core::core_arch::hexagon::v128Q6_Vb_vsub_VbVb_satfunction
2217core::core_arch::hexagon::v128Q6_Vh_condacc_QVhVhfunction
2218core::core_arch::hexagon::v128Q6_Vh_condacc_QnVhVhfunction
2219core::core_arch::hexagon::v128Q6_Vh_condnac_QVhVhfunction
2220core::core_arch::hexagon::v128Q6_Vh_condnac_QnVhVhfunction
2221core::core_arch::hexagon::v128Q6_Vh_equals_Vhffunction
2222core::core_arch::hexagon::v128Q6_Vh_prefixsum_Qfunction
2223core::core_arch::hexagon::v128Q6_Vh_vabs_Vhfunction
2224core::core_arch::hexagon::v128Q6_Vh_vabs_Vh_satfunction
2225core::core_arch::hexagon::v128Q6_Vh_vadd_VhVhfunction
2226core::core_arch::hexagon::v128Q6_Vh_vadd_VhVh_satfunction
2227core::core_arch::hexagon::v128Q6_Vh_vadd_vclb_VhVhfunction
2228core::core_arch::hexagon::v128Q6_Vh_vasl_VhRfunction
2229core::core_arch::hexagon::v128Q6_Vh_vasl_VhVhfunction
2230core::core_arch::hexagon::v128Q6_Vh_vaslacc_VhVhRfunction
2231core::core_arch::hexagon::v128Q6_Vh_vasr_VhRfunction
2232core::core_arch::hexagon::v128Q6_Vh_vasr_VhVhfunction
2233core::core_arch::hexagon::v128Q6_Vh_vasr_VwVwRfunction
2234core::core_arch::hexagon::v128Q6_Vh_vasr_VwVwR_rnd_satfunction
2235core::core_arch::hexagon::v128Q6_Vh_vasr_VwVwR_satfunction
2236core::core_arch::hexagon::v128Q6_Vh_vasracc_VhVhRfunction
2237core::core_arch::hexagon::v128Q6_Vh_vavg_VhVhfunction
2238core::core_arch::hexagon::v128Q6_Vh_vavg_VhVh_rndfunction
2239core::core_arch::hexagon::v128Q6_Vh_vcvt_Vhffunction
2240core::core_arch::hexagon::v128Q6_Vh_vdeal_Vhfunction
2241core::core_arch::hexagon::v128Q6_Vh_vdmpy_VubRbfunction
2242core::core_arch::hexagon::v128Q6_Vh_vdmpyacc_VhVubRbfunction
2243core::core_arch::hexagon::v128Q6_Vh_vlsr_VhVhfunction
2244core::core_arch::hexagon::v128Q6_Vh_vmax_VhVhfunction
2245core::core_arch::hexagon::v128Q6_Vh_vmin_VhVhfunction
2246core::core_arch::hexagon::v128Q6_Vh_vmpy_VhRh_s1_rnd_satfunction
2247core::core_arch::hexagon::v128Q6_Vh_vmpy_VhRh_s1_satfunction
2248core::core_arch::hexagon::v128Q6_Vh_vmpy_VhVh_s1_rnd_satfunction
2249core::core_arch::hexagon::v128Q6_Vh_vmpyi_VhRbfunction
2250core::core_arch::hexagon::v128Q6_Vh_vmpyi_VhVhfunction
2251core::core_arch::hexagon::v128Q6_Vh_vmpyiacc_VhVhRbfunction
2252core::core_arch::hexagon::v128Q6_Vh_vmpyiacc_VhVhVhfunction
2253core::core_arch::hexagon::v128Q6_Vh_vnavg_VhVhfunction
2254core::core_arch::hexagon::v128Q6_Vh_vnormamt_Vhfunction
2255core::core_arch::hexagon::v128Q6_Vh_vpack_VwVw_satfunction
2256core::core_arch::hexagon::v128Q6_Vh_vpacke_VwVwfunction
2257core::core_arch::hexagon::v128Q6_Vh_vpacko_VwVwfunction
2258core::core_arch::hexagon::v128Q6_Vh_vpopcount_Vhfunction
2259core::core_arch::hexagon::v128Q6_Vh_vround_VwVw_satfunction
2260core::core_arch::hexagon::v128Q6_Vh_vsat_VwVwfunction
2261core::core_arch::hexagon::v128Q6_Vh_vshuff_Vhfunction
2262core::core_arch::hexagon::v128Q6_Vh_vshuffe_VhVhfunction
2263core::core_arch::hexagon::v128Q6_Vh_vshuffo_VhVhfunction
2264core::core_arch::hexagon::v128Q6_Vh_vsplat_Rfunction
2265core::core_arch::hexagon::v128Q6_Vh_vsub_VhVhfunction
2266core::core_arch::hexagon::v128Q6_Vh_vsub_VhVh_satfunction
2267core::core_arch::hexagon::v128Q6_Vhf_equals_Vhfunction
2268core::core_arch::hexagon::v128Q6_Vhf_equals_Vqf16function
2269core::core_arch::hexagon::v128Q6_Vhf_equals_Wqf32function
2270core::core_arch::hexagon::v128Q6_Vhf_vabs_Vhffunction
2271core::core_arch::hexagon::v128Q6_Vhf_vadd_VhfVhffunction
2272core::core_arch::hexagon::v128Q6_Vhf_vcvt_Vhfunction
2273core::core_arch::hexagon::v128Q6_Vhf_vcvt_VsfVsffunction
2274core::core_arch::hexagon::v128Q6_Vhf_vcvt_Vuhfunction
2275core::core_arch::hexagon::v128Q6_Vhf_vfmax_VhfVhffunction
2276core::core_arch::hexagon::v128Q6_Vhf_vfmin_VhfVhffunction
2277core::core_arch::hexagon::v128Q6_Vhf_vfneg_Vhffunction
2278core::core_arch::hexagon::v128Q6_Vhf_vmax_VhfVhffunction
2279core::core_arch::hexagon::v128Q6_Vhf_vmin_VhfVhffunction
2280core::core_arch::hexagon::v128Q6_Vhf_vmpy_VhfVhffunction
2281core::core_arch::hexagon::v128Q6_Vhf_vmpyacc_VhfVhfVhffunction
2282core::core_arch::hexagon::v128Q6_Vhf_vsub_VhfVhffunction
2283core::core_arch::hexagon::v128Q6_Vqf16_vadd_VhfVhffunction
2284core::core_arch::hexagon::v128Q6_Vqf16_vadd_Vqf16Vhffunction
2285core::core_arch::hexagon::v128Q6_Vqf16_vadd_Vqf16Vqf16function
2286core::core_arch::hexagon::v128Q6_Vqf16_vmpy_VhfVhffunction
2287core::core_arch::hexagon::v128Q6_Vqf16_vmpy_Vqf16Vhffunction
2288core::core_arch::hexagon::v128Q6_Vqf16_vmpy_Vqf16Vqf16function
2289core::core_arch::hexagon::v128Q6_Vqf16_vsub_VhfVhffunction
2290core::core_arch::hexagon::v128Q6_Vqf16_vsub_Vqf16Vhffunction
2291core::core_arch::hexagon::v128Q6_Vqf16_vsub_Vqf16Vqf16function
2292core::core_arch::hexagon::v128Q6_Vqf32_vadd_Vqf32Vqf32function
2293core::core_arch::hexagon::v128Q6_Vqf32_vadd_Vqf32Vsffunction
2294core::core_arch::hexagon::v128Q6_Vqf32_vadd_VsfVsffunction
2295core::core_arch::hexagon::v128Q6_Vqf32_vmpy_Vqf32Vqf32function
2296core::core_arch::hexagon::v128Q6_Vqf32_vmpy_VsfVsffunction
2297core::core_arch::hexagon::v128Q6_Vqf32_vsub_Vqf32Vqf32function
2298core::core_arch::hexagon::v128Q6_Vqf32_vsub_Vqf32Vsffunction
2299core::core_arch::hexagon::v128Q6_Vqf32_vsub_VsfVsffunction
2300core::core_arch::hexagon::v128Q6_Vsf_equals_Vqf32function
2301core::core_arch::hexagon::v128Q6_Vsf_equals_Vwfunction
2302core::core_arch::hexagon::v128Q6_Vsf_vabs_Vsffunction
2303core::core_arch::hexagon::v128Q6_Vsf_vadd_VsfVsffunction
2304core::core_arch::hexagon::v128Q6_Vsf_vdmpy_VhfVhffunction
2305core::core_arch::hexagon::v128Q6_Vsf_vdmpyacc_VsfVhfVhffunction
2306core::core_arch::hexagon::v128Q6_Vsf_vfmax_VsfVsffunction
2307core::core_arch::hexagon::v128Q6_Vsf_vfmin_VsfVsffunction
2308core::core_arch::hexagon::v128Q6_Vsf_vfneg_Vsffunction
2309core::core_arch::hexagon::v128Q6_Vsf_vmax_VsfVsffunction
2310core::core_arch::hexagon::v128Q6_Vsf_vmin_VsfVsffunction
2311core::core_arch::hexagon::v128Q6_Vsf_vmpy_VsfVsffunction
2312core::core_arch::hexagon::v128Q6_Vsf_vsub_VsfVsffunction
2313core::core_arch::hexagon::v128Q6_Vub_vabsdiff_VubVubfunction
2314core::core_arch::hexagon::v128Q6_Vub_vadd_VubVb_satfunction
2315core::core_arch::hexagon::v128Q6_Vub_vadd_VubVub_satfunction
2316core::core_arch::hexagon::v128Q6_Vub_vasr_VhVhR_rnd_satfunction
2317core::core_arch::hexagon::v128Q6_Vub_vasr_VhVhR_satfunction
2318core::core_arch::hexagon::v128Q6_Vub_vasr_VuhVuhR_rnd_satfunction
2319core::core_arch::hexagon::v128Q6_Vub_vasr_VuhVuhR_satfunction
2320core::core_arch::hexagon::v128Q6_Vub_vasr_WuhVub_rnd_satfunction
2321core::core_arch::hexagon::v128Q6_Vub_vasr_WuhVub_satfunction
2322core::core_arch::hexagon::v128Q6_Vub_vavg_VubVubfunction
2323core::core_arch::hexagon::v128Q6_Vub_vavg_VubVub_rndfunction
2324core::core_arch::hexagon::v128Q6_Vub_vcvt_VhfVhffunction
2325core::core_arch::hexagon::v128Q6_Vub_vlsr_VubRfunction
2326core::core_arch::hexagon::v128Q6_Vub_vmax_VubVubfunction
2327core::core_arch::hexagon::v128Q6_Vub_vmin_VubVubfunction
2328core::core_arch::hexagon::v128Q6_Vub_vpack_VhVh_satfunction
2329core::core_arch::hexagon::v128Q6_Vub_vround_VhVh_satfunction
2330core::core_arch::hexagon::v128Q6_Vub_vround_VuhVuh_satfunction
2331core::core_arch::hexagon::v128Q6_Vub_vsat_VhVhfunction
2332core::core_arch::hexagon::v128Q6_Vub_vsub_VubVb_satfunction
2333core::core_arch::hexagon::v128Q6_Vub_vsub_VubVub_satfunction
2334core::core_arch::hexagon::v128Q6_Vuh_vabsdiff_VhVhfunction
2335core::core_arch::hexagon::v128Q6_Vuh_vabsdiff_VuhVuhfunction
2336core::core_arch::hexagon::v128Q6_Vuh_vadd_VuhVuh_satfunction
2337core::core_arch::hexagon::v128Q6_Vuh_vasr_VuwVuwR_rnd_satfunction
2338core::core_arch::hexagon::v128Q6_Vuh_vasr_VuwVuwR_satfunction
2339core::core_arch::hexagon::v128Q6_Vuh_vasr_VwVwR_rnd_satfunction
2340core::core_arch::hexagon::v128Q6_Vuh_vasr_VwVwR_satfunction
2341core::core_arch::hexagon::v128Q6_Vuh_vasr_WwVuh_rnd_satfunction
2342core::core_arch::hexagon::v128Q6_Vuh_vasr_WwVuh_satfunction
2343core::core_arch::hexagon::v128Q6_Vuh_vavg_VuhVuhfunction
2344core::core_arch::hexagon::v128Q6_Vuh_vavg_VuhVuh_rndfunction
2345core::core_arch::hexagon::v128Q6_Vuh_vcl0_Vuhfunction
2346core::core_arch::hexagon::v128Q6_Vuh_vcvt_Vhffunction
2347core::core_arch::hexagon::v128Q6_Vuh_vlsr_VuhRfunction
2348core::core_arch::hexagon::v128Q6_Vuh_vmax_VuhVuhfunction
2349core::core_arch::hexagon::v128Q6_Vuh_vmin_VuhVuhfunction
2350core::core_arch::hexagon::v128Q6_Vuh_vmpy_VuhVuh_rs16function
2351core::core_arch::hexagon::v128Q6_Vuh_vpack_VwVw_satfunction
2352core::core_arch::hexagon::v128Q6_Vuh_vround_VuwVuw_satfunction
2353core::core_arch::hexagon::v128Q6_Vuh_vround_VwVw_satfunction
2354core::core_arch::hexagon::v128Q6_Vuh_vsat_VuwVuwfunction
2355core::core_arch::hexagon::v128Q6_Vuh_vsub_VuhVuh_satfunction
2356core::core_arch::hexagon::v128Q6_Vuw_vabsdiff_VwVwfunction
2357core::core_arch::hexagon::v128Q6_Vuw_vadd_VuwVuw_satfunction
2358core::core_arch::hexagon::v128Q6_Vuw_vavg_VuwVuwfunction
2359core::core_arch::hexagon::v128Q6_Vuw_vavg_VuwVuw_rndfunction
2360core::core_arch::hexagon::v128Q6_Vuw_vcl0_Vuwfunction
2361core::core_arch::hexagon::v128Q6_Vuw_vlsr_VuwRfunction
2362core::core_arch::hexagon::v128Q6_Vuw_vmpye_VuhRuhfunction
2363core::core_arch::hexagon::v128Q6_Vuw_vmpyeacc_VuwVuhRuhfunction
2364core::core_arch::hexagon::v128Q6_Vuw_vrmpy_VubRubfunction
2365core::core_arch::hexagon::v128Q6_Vuw_vrmpy_VubVubfunction
2366core::core_arch::hexagon::v128Q6_Vuw_vrmpyacc_VuwVubRubfunction
2367core::core_arch::hexagon::v128Q6_Vuw_vrmpyacc_VuwVubVubfunction
2368core::core_arch::hexagon::v128Q6_Vuw_vrotr_VuwVuwfunction
2369core::core_arch::hexagon::v128Q6_Vuw_vsub_VuwVuw_satfunction
2370core::core_arch::hexagon::v128Q6_Vw_condacc_QVwVwfunction
2371core::core_arch::hexagon::v128Q6_Vw_condacc_QnVwVwfunction
2372core::core_arch::hexagon::v128Q6_Vw_condnac_QVwVwfunction
2373core::core_arch::hexagon::v128Q6_Vw_condnac_QnVwVwfunction
2374core::core_arch::hexagon::v128Q6_Vw_equals_Vsffunction
2375core::core_arch::hexagon::v128Q6_Vw_prefixsum_Qfunction
2376core::core_arch::hexagon::v128Q6_Vw_vabs_Vwfunction
2377core::core_arch::hexagon::v128Q6_Vw_vabs_Vw_satfunction
2378core::core_arch::hexagon::v128Q6_Vw_vadd_VwVwfunction
2379core::core_arch::hexagon::v128Q6_Vw_vadd_VwVwQ_carry_satfunction
2380core::core_arch::hexagon::v128Q6_Vw_vadd_VwVw_satfunction
2381core::core_arch::hexagon::v128Q6_Vw_vadd_vclb_VwVwfunction
2382core::core_arch::hexagon::v128Q6_Vw_vasl_VwRfunction
2383core::core_arch::hexagon::v128Q6_Vw_vasl_VwVwfunction
2384core::core_arch::hexagon::v128Q6_Vw_vaslacc_VwVwRfunction
2385core::core_arch::hexagon::v128Q6_Vw_vasr_VwRfunction
2386core::core_arch::hexagon::v128Q6_Vw_vasr_VwVwfunction
2387core::core_arch::hexagon::v128Q6_Vw_vasracc_VwVwRfunction
2388core::core_arch::hexagon::v128Q6_Vw_vavg_VwVwfunction
2389core::core_arch::hexagon::v128Q6_Vw_vavg_VwVw_rndfunction
2390core::core_arch::hexagon::v128Q6_Vw_vdmpy_VhRbfunction
2391core::core_arch::hexagon::v128Q6_Vw_vdmpy_VhRh_satfunction
2392core::core_arch::hexagon::v128Q6_Vw_vdmpy_VhRuh_satfunction
2393core::core_arch::hexagon::v128Q6_Vw_vdmpy_VhVh_satfunction
2394core::core_arch::hexagon::v128Q6_Vw_vdmpy_WhRh_satfunction
2395core::core_arch::hexagon::v128Q6_Vw_vdmpy_WhRuh_satfunction
2396core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwVhRbfunction
2397core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwVhRh_satfunction
2398core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwVhRuh_satfunction
2399core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwVhVh_satfunction
2400core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwWhRh_satfunction
2401core::core_arch::hexagon::v128Q6_Vw_vdmpyacc_VwWhRuh_satfunction
2402core::core_arch::hexagon::v128Q6_Vw_vfmv_Vwfunction
2403core::core_arch::hexagon::v128Q6_Vw_vinsert_VwRfunction
2404core::core_arch::hexagon::v128Q6_Vw_vlsr_VwVwfunction
2405core::core_arch::hexagon::v128Q6_Vw_vmax_VwVwfunction
2406core::core_arch::hexagon::v128Q6_Vw_vmin_VwVwfunction
2407core::core_arch::hexagon::v128Q6_Vw_vmpye_VwVuhfunction
2408core::core_arch::hexagon::v128Q6_Vw_vmpyi_VwRbfunction
2409core::core_arch::hexagon::v128Q6_Vw_vmpyi_VwRhfunction
2410core::core_arch::hexagon::v128Q6_Vw_vmpyi_VwRubfunction
2411core::core_arch::hexagon::v128Q6_Vw_vmpyiacc_VwVwRbfunction
2412core::core_arch::hexagon::v128Q6_Vw_vmpyiacc_VwVwRhfunction
2413core::core_arch::hexagon::v128Q6_Vw_vmpyiacc_VwVwRubfunction
2414core::core_arch::hexagon::v128Q6_Vw_vmpyie_VwVuhfunction
2415core::core_arch::hexagon::v128Q6_Vw_vmpyieacc_VwVwVhfunction
2416core::core_arch::hexagon::v128Q6_Vw_vmpyieacc_VwVwVuhfunction
2417core::core_arch::hexagon::v128Q6_Vw_vmpyieo_VhVhfunction
2418core::core_arch::hexagon::v128Q6_Vw_vmpyio_VwVhfunction
2419core::core_arch::hexagon::v128Q6_Vw_vmpyo_VwVh_s1_rnd_satfunction
2420core::core_arch::hexagon::v128Q6_Vw_vmpyo_VwVh_s1_satfunction
2421core::core_arch::hexagon::v128Q6_Vw_vmpyoacc_VwVwVh_s1_rnd_sat_shiftfunction
2422core::core_arch::hexagon::v128Q6_Vw_vmpyoacc_VwVwVh_s1_sat_shiftfunction
2423core::core_arch::hexagon::v128Q6_Vw_vnavg_VwVwfunction
2424core::core_arch::hexagon::v128Q6_Vw_vnormamt_Vwfunction
2425core::core_arch::hexagon::v128Q6_Vw_vrmpy_VbVbfunction
2426core::core_arch::hexagon::v128Q6_Vw_vrmpy_VubRbfunction
2427core::core_arch::hexagon::v128Q6_Vw_vrmpy_VubVbfunction
2428core::core_arch::hexagon::v128Q6_Vw_vrmpyacc_VwVbVbfunction
2429core::core_arch::hexagon::v128Q6_Vw_vrmpyacc_VwVubRbfunction
2430core::core_arch::hexagon::v128Q6_Vw_vrmpyacc_VwVubVbfunction
2431core::core_arch::hexagon::v128Q6_Vw_vsatdw_VwVwfunction
2432core::core_arch::hexagon::v128Q6_Vw_vsub_VwVwfunction
2433core::core_arch::hexagon::v128Q6_Vw_vsub_VwVw_satfunction
2434core::core_arch::hexagon::v128Q6_W_equals_Wfunction
2435core::core_arch::hexagon::v128Q6_W_vcombine_VVfunction
2436core::core_arch::hexagon::v128Q6_W_vdeal_VVRfunction
2437core::core_arch::hexagon::v128Q6_W_vmpye_VwVuhfunction
2438core::core_arch::hexagon::v128Q6_W_vmpyoacc_WVwVhfunction
2439core::core_arch::hexagon::v128Q6_W_vshuff_VVRfunction
2440core::core_arch::hexagon::v128Q6_W_vswap_QVVfunction
2441core::core_arch::hexagon::v128Q6_W_vzerofunction
2442core::core_arch::hexagon::v128Q6_Wb_vadd_WbWbfunction
2443core::core_arch::hexagon::v128Q6_Wb_vadd_WbWb_satfunction
2444core::core_arch::hexagon::v128Q6_Wb_vshuffoe_VbVbfunction
2445core::core_arch::hexagon::v128Q6_Wb_vsub_WbWbfunction
2446core::core_arch::hexagon::v128Q6_Wb_vsub_WbWb_satfunction
2447core::core_arch::hexagon::v128Q6_Wh_vadd_VubVubfunction
2448core::core_arch::hexagon::v128Q6_Wh_vadd_WhWhfunction
2449core::core_arch::hexagon::v128Q6_Wh_vadd_WhWh_satfunction
2450core::core_arch::hexagon::v128Q6_Wh_vaddacc_WhVubVubfunction
2451core::core_arch::hexagon::v128Q6_Wh_vdmpy_WubRbfunction
2452core::core_arch::hexagon::v128Q6_Wh_vdmpyacc_WhWubRbfunction
2453core::core_arch::hexagon::v128Q6_Wh_vlut16_VbVhIfunction
2454core::core_arch::hexagon::v128Q6_Wh_vlut16_VbVhRfunction
2455core::core_arch::hexagon::v128Q6_Wh_vlut16_VbVhR_nomatchfunction
2456core::core_arch::hexagon::v128Q6_Wh_vlut16or_WhVbVhIfunction
2457core::core_arch::hexagon::v128Q6_Wh_vlut16or_WhVbVhRfunction
2458core::core_arch::hexagon::v128Q6_Wh_vmpa_WubRbfunction
2459core::core_arch::hexagon::v128Q6_Wh_vmpa_WubRubfunction
2460core::core_arch::hexagon::v128Q6_Wh_vmpa_WubWbfunction
2461core::core_arch::hexagon::v128Q6_Wh_vmpa_WubWubfunction
2462core::core_arch::hexagon::v128Q6_Wh_vmpaacc_WhWubRbfunction
2463core::core_arch::hexagon::v128Q6_Wh_vmpaacc_WhWubRubfunction
2464core::core_arch::hexagon::v128Q6_Wh_vmpy_VbVbfunction
2465core::core_arch::hexagon::v128Q6_Wh_vmpy_VubRbfunction
2466core::core_arch::hexagon::v128Q6_Wh_vmpy_VubVbfunction
2467core::core_arch::hexagon::v128Q6_Wh_vmpyacc_WhVbVbfunction
2468core::core_arch::hexagon::v128Q6_Wh_vmpyacc_WhVubRbfunction
2469core::core_arch::hexagon::v128Q6_Wh_vmpyacc_WhVubVbfunction
2470core::core_arch::hexagon::v128Q6_Wh_vshuffoe_VhVhfunction
2471core::core_arch::hexagon::v128Q6_Wh_vsub_VubVubfunction
2472core::core_arch::hexagon::v128Q6_Wh_vsub_WhWhfunction
2473core::core_arch::hexagon::v128Q6_Wh_vsub_WhWh_satfunction
2474core::core_arch::hexagon::v128Q6_Wh_vsxt_Vbfunction
2475core::core_arch::hexagon::v128Q6_Wh_vtmpy_WbRbfunction
2476core::core_arch::hexagon::v128Q6_Wh_vtmpy_WubRbfunction
2477core::core_arch::hexagon::v128Q6_Wh_vtmpyacc_WhWbRbfunction
2478core::core_arch::hexagon::v128Q6_Wh_vtmpyacc_WhWubRbfunction
2479core::core_arch::hexagon::v128Q6_Wh_vunpack_Vbfunction
2480core::core_arch::hexagon::v128Q6_Wh_vunpackoor_WhVbfunction
2481core::core_arch::hexagon::v128Q6_Whf_vcvt2_Vbfunction
2482core::core_arch::hexagon::v128Q6_Whf_vcvt2_Vubfunction
2483core::core_arch::hexagon::v128Q6_Whf_vcvt_Vfunction
2484core::core_arch::hexagon::v128Q6_Whf_vcvt_Vbfunction
2485core::core_arch::hexagon::v128Q6_Whf_vcvt_Vubfunction
2486core::core_arch::hexagon::v128Q6_Wqf32_vmpy_VhfVhffunction
2487core::core_arch::hexagon::v128Q6_Wqf32_vmpy_Vqf16Vhffunction
2488core::core_arch::hexagon::v128Q6_Wqf32_vmpy_Vqf16Vqf16function
2489core::core_arch::hexagon::v128Q6_Wsf_vadd_VhfVhffunction
2490core::core_arch::hexagon::v128Q6_Wsf_vcvt_Vhffunction
2491core::core_arch::hexagon::v128Q6_Wsf_vmpy_VhfVhffunction
2492core::core_arch::hexagon::v128Q6_Wsf_vmpyacc_WsfVhfVhffunction
2493core::core_arch::hexagon::v128Q6_Wsf_vsub_VhfVhffunction
2494core::core_arch::hexagon::v128Q6_Wub_vadd_WubWub_satfunction
2495core::core_arch::hexagon::v128Q6_Wub_vsub_WubWub_satfunction
2496core::core_arch::hexagon::v128Q6_Wuh_vadd_WuhWuh_satfunction
2497core::core_arch::hexagon::v128Q6_Wuh_vmpy_VubRubfunction
2498core::core_arch::hexagon::v128Q6_Wuh_vmpy_VubVubfunction
2499core::core_arch::hexagon::v128Q6_Wuh_vmpyacc_WuhVubRubfunction
2500core::core_arch::hexagon::v128Q6_Wuh_vmpyacc_WuhVubVubfunction
2501core::core_arch::hexagon::v128Q6_Wuh_vsub_WuhWuh_satfunction
2502core::core_arch::hexagon::v128Q6_Wuh_vunpack_Vubfunction
2503core::core_arch::hexagon::v128Q6_Wuh_vzxt_Vubfunction
2504core::core_arch::hexagon::v128Q6_Wuw_vadd_WuwWuw_satfunction
2505core::core_arch::hexagon::v128Q6_Wuw_vdsad_WuhRuhfunction
2506core::core_arch::hexagon::v128Q6_Wuw_vdsadacc_WuwWuhRuhfunction
2507core::core_arch::hexagon::v128Q6_Wuw_vmpy_VuhRuhfunction
2508core::core_arch::hexagon::v128Q6_Wuw_vmpy_VuhVuhfunction
2509core::core_arch::hexagon::v128Q6_Wuw_vmpyacc_WuwVuhRuhfunction
2510core::core_arch::hexagon::v128Q6_Wuw_vmpyacc_WuwVuhVuhfunction
2511core::core_arch::hexagon::v128Q6_Wuw_vrmpy_WubRubIfunction
2512core::core_arch::hexagon::v128Q6_Wuw_vrmpyacc_WuwWubRubIfunction
2513core::core_arch::hexagon::v128Q6_Wuw_vrsad_WubRubIfunction
2514core::core_arch::hexagon::v128Q6_Wuw_vrsadacc_WuwWubRubIfunction
2515core::core_arch::hexagon::v128Q6_Wuw_vsub_WuwWuw_satfunction
2516core::core_arch::hexagon::v128Q6_Wuw_vunpack_Vuhfunction
2517core::core_arch::hexagon::v128Q6_Wuw_vzxt_Vuhfunction
2518core::core_arch::hexagon::v128Q6_Ww_v6mpy_WubWbI_hfunction
2519core::core_arch::hexagon::v128Q6_Ww_v6mpy_WubWbI_vfunction
2520core::core_arch::hexagon::v128Q6_Ww_v6mpyacc_WwWubWbI_hfunction
2521core::core_arch::hexagon::v128Q6_Ww_v6mpyacc_WwWubWbI_vfunction
2522core::core_arch::hexagon::v128Q6_Ww_vadd_VhVhfunction
2523core::core_arch::hexagon::v128Q6_Ww_vadd_VuhVuhfunction
2524core::core_arch::hexagon::v128Q6_Ww_vadd_WwWwfunction
2525core::core_arch::hexagon::v128Q6_Ww_vadd_WwWw_satfunction
2526core::core_arch::hexagon::v128Q6_Ww_vaddacc_WwVhVhfunction
2527core::core_arch::hexagon::v128Q6_Ww_vaddacc_WwVuhVuhfunction
2528core::core_arch::hexagon::v128Q6_Ww_vasrinto_WwVwVwfunction
2529core::core_arch::hexagon::v128Q6_Ww_vdmpy_WhRbfunction
2530core::core_arch::hexagon::v128Q6_Ww_vdmpyacc_WwWhRbfunction
2531core::core_arch::hexagon::v128Q6_Ww_vmpa_WhRbfunction
2532core::core_arch::hexagon::v128Q6_Ww_vmpa_WuhRbfunction
2533core::core_arch::hexagon::v128Q6_Ww_vmpaacc_WwWhRbfunction
2534core::core_arch::hexagon::v128Q6_Ww_vmpaacc_WwWuhRbfunction
2535core::core_arch::hexagon::v128Q6_Ww_vmpy_VhRhfunction
2536core::core_arch::hexagon::v128Q6_Ww_vmpy_VhVhfunction
2537core::core_arch::hexagon::v128Q6_Ww_vmpy_VhVuhfunction
2538core::core_arch::hexagon::v128Q6_Ww_vmpyacc_WwVhRhfunction
2539core::core_arch::hexagon::v128Q6_Ww_vmpyacc_WwVhRh_satfunction
2540core::core_arch::hexagon::v128Q6_Ww_vmpyacc_WwVhVhfunction
2541core::core_arch::hexagon::v128Q6_Ww_vmpyacc_WwVhVuhfunction
2542core::core_arch::hexagon::v128Q6_Ww_vrmpy_WubRbIfunction
2543core::core_arch::hexagon::v128Q6_Ww_vrmpyacc_WwWubRbIfunction
2544core::core_arch::hexagon::v128Q6_Ww_vsub_VhVhfunction
2545core::core_arch::hexagon::v128Q6_Ww_vsub_VuhVuhfunction
2546core::core_arch::hexagon::v128Q6_Ww_vsub_WwWwfunction
2547core::core_arch::hexagon::v128Q6_Ww_vsub_WwWw_satfunction
2548core::core_arch::hexagon::v128Q6_Ww_vsxt_Vhfunction
2549core::core_arch::hexagon::v128Q6_Ww_vtmpy_WhRbfunction
2550core::core_arch::hexagon::v128Q6_Ww_vtmpyacc_WwWhRbfunction
2551core::core_arch::hexagon::v128Q6_Ww_vunpack_Vhfunction
2552core::core_arch::hexagon::v128Q6_Ww_vunpackoor_WwVhfunction
2553core::core_arch::hexagon::v128Q6_vgather_AQRMVhfunction
2554core::core_arch::hexagon::v128Q6_vgather_AQRMVwfunction
2555core::core_arch::hexagon::v128Q6_vgather_AQRMWwfunction
2556core::core_arch::hexagon::v128Q6_vgather_ARMVhfunction
2557core::core_arch::hexagon::v128Q6_vgather_ARMVwfunction
2558core::core_arch::hexagon::v128Q6_vgather_ARMWwfunction
2559core::core_arch::hexagon::v128Q6_vmem_QRIVfunction
2560core::core_arch::hexagon::v128Q6_vmem_QRIV_ntfunction
2561core::core_arch::hexagon::v128Q6_vmem_QnRIVfunction
2562core::core_arch::hexagon::v128Q6_vmem_QnRIV_ntfunction
2563core::core_arch::hexagon::v128Q6_vscatter_QRMVhVfunction
2564core::core_arch::hexagon::v128Q6_vscatter_QRMVwVfunction
2565core::core_arch::hexagon::v128Q6_vscatter_QRMWwVfunction
2566core::core_arch::hexagon::v128Q6_vscatter_RMVhVfunction
2567core::core_arch::hexagon::v128Q6_vscatter_RMVwVfunction
2568core::core_arch::hexagon::v128Q6_vscatter_RMWwVfunction
2569core::core_arch::hexagon::v128Q6_vscatteracc_RMVhVfunction
2570core::core_arch::hexagon::v128Q6_vscatteracc_RMVwVfunction
2571core::core_arch::hexagon::v128Q6_vscatteracc_RMWwVfunction
2572core::core_arch::hexagon::v64Q6_Q_and_QQfunction
2573core::core_arch::hexagon::v64Q6_Q_and_QQnfunction
2574core::core_arch::hexagon::v64Q6_Q_not_Qfunction
2575core::core_arch::hexagon::v64Q6_Q_or_QQfunction
2576core::core_arch::hexagon::v64Q6_Q_or_QQnfunction
2577core::core_arch::hexagon::v64Q6_Q_vand_VRfunction
2578core::core_arch::hexagon::v64Q6_Q_vandor_QVRfunction
2579core::core_arch::hexagon::v64Q6_Q_vcmp_eq_VbVbfunction
2580core::core_arch::hexagon::v64Q6_Q_vcmp_eq_VhVhfunction
2581core::core_arch::hexagon::v64Q6_Q_vcmp_eq_VwVwfunction
2582core::core_arch::hexagon::v64Q6_Q_vcmp_eqand_QVbVbfunction
2583core::core_arch::hexagon::v64Q6_Q_vcmp_eqand_QVhVhfunction
2584core::core_arch::hexagon::v64Q6_Q_vcmp_eqand_QVwVwfunction
2585core::core_arch::hexagon::v64Q6_Q_vcmp_eqor_QVbVbfunction
2586core::core_arch::hexagon::v64Q6_Q_vcmp_eqor_QVhVhfunction
2587core::core_arch::hexagon::v64Q6_Q_vcmp_eqor_QVwVwfunction
2588core::core_arch::hexagon::v64Q6_Q_vcmp_eqxacc_QVbVbfunction
2589core::core_arch::hexagon::v64Q6_Q_vcmp_eqxacc_QVhVhfunction
2590core::core_arch::hexagon::v64Q6_Q_vcmp_eqxacc_QVwVwfunction
2591core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VbVbfunction
2592core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VhVhfunction
2593core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VhfVhffunction
2594core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VsfVsffunction
2595core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VubVubfunction
2596core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VuhVuhfunction
2597core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VuwVuwfunction
2598core::core_arch::hexagon::v64Q6_Q_vcmp_gt_VwVwfunction
2599core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVbVbfunction
2600core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVhVhfunction
2601core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVhfVhffunction
2602core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVsfVsffunction
2603core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVubVubfunction
2604core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVuhVuhfunction
2605core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVuwVuwfunction
2606core::core_arch::hexagon::v64Q6_Q_vcmp_gtand_QVwVwfunction
2607core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVbVbfunction
2608core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVhVhfunction
2609core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVhfVhffunction
2610core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVsfVsffunction
2611core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVubVubfunction
2612core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVuhVuhfunction
2613core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVuwVuwfunction
2614core::core_arch::hexagon::v64Q6_Q_vcmp_gtor_QVwVwfunction
2615core::core_arch::hexagon::v64Q6_Q_vcmp_gtxacc_QVbVbfunction
2616core::core_arch::hexagon::v64Q6_Q_vcmp_gtxacc_QVhVhfunction
2617core::core_arch::hexagon::v64Q6_Q_vcmp_gtxacc_QVhfVhffunction
2618core::core_arch::hexagon::v64Q6_Q_vcmp_gtxacc_QVsfVsffunction
2619 | core::core_arch::hexagon::v64 | Q6_Q_vcmp_gtxacc_QVubVub | function
2620 | core::core_arch::hexagon::v64 | Q6_Q_vcmp_gtxacc_QVuhVuh | function
2621 | core::core_arch::hexagon::v64 | Q6_Q_vcmp_gtxacc_QVuwVuw | function
2622 | core::core_arch::hexagon::v64 | Q6_Q_vcmp_gtxacc_QVwVw | function
2623 | core::core_arch::hexagon::v64 | Q6_Q_vsetq2_R | function
2624 | core::core_arch::hexagon::v64 | Q6_Q_vsetq_R | function
2625 | core::core_arch::hexagon::v64 | Q6_Q_xor_QQ | function
2626 | core::core_arch::hexagon::v64 | Q6_Qb_vshuffe_QhQh | function
2627 | core::core_arch::hexagon::v64 | Q6_Qh_vshuffe_QwQw | function
2628 | core::core_arch::hexagon::v64 | Q6_R_vextract_VR | function
2629 | core::core_arch::hexagon::v64 | Q6_V_equals_V | function
2630 | core::core_arch::hexagon::v64 | Q6_V_hi_W | function
2631 | core::core_arch::hexagon::v64 | Q6_V_lo_W | function
2632 | core::core_arch::hexagon::v64 | Q6_V_vabs_V | function
2633 | core::core_arch::hexagon::v64 | Q6_V_valign_VVI | function
2634 | core::core_arch::hexagon::v64 | Q6_V_valign_VVR | function
2635 | core::core_arch::hexagon::v64 | Q6_V_vand_QR | function
2636 | core::core_arch::hexagon::v64 | Q6_V_vand_QV | function
2637 | core::core_arch::hexagon::v64 | Q6_V_vand_QnR | function
2638 | core::core_arch::hexagon::v64 | Q6_V_vand_QnV | function
2639 | core::core_arch::hexagon::v64 | Q6_V_vand_VV | function
2640 | core::core_arch::hexagon::v64 | Q6_V_vandor_VQR | function
2641 | core::core_arch::hexagon::v64 | Q6_V_vandor_VQnR | function
2642 | core::core_arch::hexagon::v64 | Q6_V_vdelta_VV | function
2643 | core::core_arch::hexagon::v64 | Q6_V_vfmax_VV | function
2644 | core::core_arch::hexagon::v64 | Q6_V_vfmin_VV | function
2645 | core::core_arch::hexagon::v64 | Q6_V_vfneg_V | function
2646 | core::core_arch::hexagon::v64 | Q6_V_vgetqfext_VR | function
2647 | core::core_arch::hexagon::v64 | Q6_V_vlalign_VVI | function
2648 | core::core_arch::hexagon::v64 | Q6_V_vlalign_VVR | function
2649 | core::core_arch::hexagon::v64 | Q6_V_vmux_QVV | function
2650 | core::core_arch::hexagon::v64 | Q6_V_vnot_V | function
2651 | core::core_arch::hexagon::v64 | Q6_V_vor_VV | function
2652 | core::core_arch::hexagon::v64 | Q6_V_vrdelta_VV | function
2653 | core::core_arch::hexagon::v64 | Q6_V_vror_VR | function
2654 | core::core_arch::hexagon::v64 | Q6_V_vsetqfext_VR | function
2655 | core::core_arch::hexagon::v64 | Q6_V_vsplat_R | function
2656 | core::core_arch::hexagon::v64 | Q6_V_vxor_VV | function
2657 | core::core_arch::hexagon::v64 | Q6_V_vzero | function
2658 | core::core_arch::hexagon::v64 | Q6_Vb_condacc_QVbVb | function
2659 | core::core_arch::hexagon::v64 | Q6_Vb_condacc_QnVbVb | function
2660 | core::core_arch::hexagon::v64 | Q6_Vb_condnac_QVbVb | function
2661 | core::core_arch::hexagon::v64 | Q6_Vb_condnac_QnVbVb | function
2662 | core::core_arch::hexagon::v64 | Q6_Vb_prefixsum_Q | function
2663 | core::core_arch::hexagon::v64 | Q6_Vb_vabs_Vb | function
2664 | core::core_arch::hexagon::v64 | Q6_Vb_vabs_Vb_sat | function
2665 | core::core_arch::hexagon::v64 | Q6_Vb_vadd_VbVb | function
2666 | core::core_arch::hexagon::v64 | Q6_Vb_vadd_VbVb_sat | function
2667 | core::core_arch::hexagon::v64 | Q6_Vb_vasr_VhVhR_rnd_sat | function
2668 | core::core_arch::hexagon::v64 | Q6_Vb_vasr_VhVhR_sat | function
2669 | core::core_arch::hexagon::v64 | Q6_Vb_vavg_VbVb | function
2670 | core::core_arch::hexagon::v64 | Q6_Vb_vavg_VbVb_rnd | function
2671 | core::core_arch::hexagon::v64 | Q6_Vb_vcvt_VhfVhf | function
2672 | core::core_arch::hexagon::v64 | Q6_Vb_vdeal_Vb | function
2673 | core::core_arch::hexagon::v64 | Q6_Vb_vdeale_VbVb | function
2674 | core::core_arch::hexagon::v64 | Q6_Vb_vlut32_VbVbI | function
2675 | core::core_arch::hexagon::v64 | Q6_Vb_vlut32_VbVbR | function
2676 | core::core_arch::hexagon::v64 | Q6_Vb_vlut32_VbVbR_nomatch | function
2677 | core::core_arch::hexagon::v64 | Q6_Vb_vlut32or_VbVbVbI | function
2678 | core::core_arch::hexagon::v64 | Q6_Vb_vlut32or_VbVbVbR | function
2679 | core::core_arch::hexagon::v64 | Q6_Vb_vmax_VbVb | function
2680 | core::core_arch::hexagon::v64 | Q6_Vb_vmin_VbVb | function
2681 | core::core_arch::hexagon::v64 | Q6_Vb_vnavg_VbVb | function
2682 | core::core_arch::hexagon::v64 | Q6_Vb_vnavg_VubVub | function
2683 | core::core_arch::hexagon::v64 | Q6_Vb_vpack_VhVh_sat | function
2684 | core::core_arch::hexagon::v64 | Q6_Vb_vpacke_VhVh | function
2685 | core::core_arch::hexagon::v64 | Q6_Vb_vpacko_VhVh | function
2686 | core::core_arch::hexagon::v64 | Q6_Vb_vround_VhVh_sat | function
2687 | core::core_arch::hexagon::v64 | Q6_Vb_vshuff_Vb | function
2688 | core::core_arch::hexagon::v64 | Q6_Vb_vshuffe_VbVb | function
2689 | core::core_arch::hexagon::v64 | Q6_Vb_vshuffo_VbVb | function
2690 | core::core_arch::hexagon::v64 | Q6_Vb_vsplat_R | function
2691 | core::core_arch::hexagon::v64 | Q6_Vb_vsub_VbVb | function
2692 | core::core_arch::hexagon::v64 | Q6_Vb_vsub_VbVb_sat | function
2693 | core::core_arch::hexagon::v64 | Q6_Vh_condacc_QVhVh | function
2694 | core::core_arch::hexagon::v64 | Q6_Vh_condacc_QnVhVh | function
2695 | core::core_arch::hexagon::v64 | Q6_Vh_condnac_QVhVh | function
2696 | core::core_arch::hexagon::v64 | Q6_Vh_condnac_QnVhVh | function
2697 | core::core_arch::hexagon::v64 | Q6_Vh_equals_Vhf | function
2698 | core::core_arch::hexagon::v64 | Q6_Vh_prefixsum_Q | function
2699 | core::core_arch::hexagon::v64 | Q6_Vh_vabs_Vh | function
2700 | core::core_arch::hexagon::v64 | Q6_Vh_vabs_Vh_sat | function
2701 | core::core_arch::hexagon::v64 | Q6_Vh_vadd_VhVh | function
2702 | core::core_arch::hexagon::v64 | Q6_Vh_vadd_VhVh_sat | function
2703 | core::core_arch::hexagon::v64 | Q6_Vh_vadd_vclb_VhVh | function
2704 | core::core_arch::hexagon::v64 | Q6_Vh_vasl_VhR | function
2705 | core::core_arch::hexagon::v64 | Q6_Vh_vasl_VhVh | function
2706 | core::core_arch::hexagon::v64 | Q6_Vh_vaslacc_VhVhR | function
2707 | core::core_arch::hexagon::v64 | Q6_Vh_vasr_VhR | function
2708 | core::core_arch::hexagon::v64 | Q6_Vh_vasr_VhVh | function
2709 | core::core_arch::hexagon::v64 | Q6_Vh_vasr_VwVwR | function
2710 | core::core_arch::hexagon::v64 | Q6_Vh_vasr_VwVwR_rnd_sat | function
2711 | core::core_arch::hexagon::v64 | Q6_Vh_vasr_VwVwR_sat | function
2712 | core::core_arch::hexagon::v64 | Q6_Vh_vasracc_VhVhR | function
2713 | core::core_arch::hexagon::v64 | Q6_Vh_vavg_VhVh | function
2714 | core::core_arch::hexagon::v64 | Q6_Vh_vavg_VhVh_rnd | function
2715 | core::core_arch::hexagon::v64 | Q6_Vh_vcvt_Vhf | function
2716 | core::core_arch::hexagon::v64 | Q6_Vh_vdeal_Vh | function
2717 | core::core_arch::hexagon::v64 | Q6_Vh_vdmpy_VubRb | function
2718 | core::core_arch::hexagon::v64 | Q6_Vh_vdmpyacc_VhVubRb | function
2719 | core::core_arch::hexagon::v64 | Q6_Vh_vlsr_VhVh | function
2720 | core::core_arch::hexagon::v64 | Q6_Vh_vmax_VhVh | function
2721 | core::core_arch::hexagon::v64 | Q6_Vh_vmin_VhVh | function
2722 | core::core_arch::hexagon::v64 | Q6_Vh_vmpy_VhRh_s1_rnd_sat | function
2723 | core::core_arch::hexagon::v64 | Q6_Vh_vmpy_VhRh_s1_sat | function
2724 | core::core_arch::hexagon::v64 | Q6_Vh_vmpy_VhVh_s1_rnd_sat | function
2725 | core::core_arch::hexagon::v64 | Q6_Vh_vmpyi_VhRb | function
2726 | core::core_arch::hexagon::v64 | Q6_Vh_vmpyi_VhVh | function
2727 | core::core_arch::hexagon::v64 | Q6_Vh_vmpyiacc_VhVhRb | function
2728 | core::core_arch::hexagon::v64 | Q6_Vh_vmpyiacc_VhVhVh | function
2729 | core::core_arch::hexagon::v64 | Q6_Vh_vnavg_VhVh | function
2730 | core::core_arch::hexagon::v64 | Q6_Vh_vnormamt_Vh | function
2731 | core::core_arch::hexagon::v64 | Q6_Vh_vpack_VwVw_sat | function
2732 | core::core_arch::hexagon::v64 | Q6_Vh_vpacke_VwVw | function
2733 | core::core_arch::hexagon::v64 | Q6_Vh_vpacko_VwVw | function
2734 | core::core_arch::hexagon::v64 | Q6_Vh_vpopcount_Vh | function
2735 | core::core_arch::hexagon::v64 | Q6_Vh_vround_VwVw_sat | function
2736 | core::core_arch::hexagon::v64 | Q6_Vh_vsat_VwVw | function
2737 | core::core_arch::hexagon::v64 | Q6_Vh_vshuff_Vh | function
2738 | core::core_arch::hexagon::v64 | Q6_Vh_vshuffe_VhVh | function
2739 | core::core_arch::hexagon::v64 | Q6_Vh_vshuffo_VhVh | function
2740 | core::core_arch::hexagon::v64 | Q6_Vh_vsplat_R | function
2741 | core::core_arch::hexagon::v64 | Q6_Vh_vsub_VhVh | function
2742 | core::core_arch::hexagon::v64 | Q6_Vh_vsub_VhVh_sat | function
2743 | core::core_arch::hexagon::v64 | Q6_Vhf_equals_Vh | function
2744 | core::core_arch::hexagon::v64 | Q6_Vhf_equals_Vqf16 | function
2745 | core::core_arch::hexagon::v64 | Q6_Vhf_equals_Wqf32 | function
2746 | core::core_arch::hexagon::v64 | Q6_Vhf_vabs_Vhf | function
2747 | core::core_arch::hexagon::v64 | Q6_Vhf_vadd_VhfVhf | function
2748 | core::core_arch::hexagon::v64 | Q6_Vhf_vcvt_Vh | function
2749 | core::core_arch::hexagon::v64 | Q6_Vhf_vcvt_VsfVsf | function
2750 | core::core_arch::hexagon::v64 | Q6_Vhf_vcvt_Vuh | function
2751 | core::core_arch::hexagon::v64 | Q6_Vhf_vfmax_VhfVhf | function
2752 | core::core_arch::hexagon::v64 | Q6_Vhf_vfmin_VhfVhf | function
2753 | core::core_arch::hexagon::v64 | Q6_Vhf_vfneg_Vhf | function
2754 | core::core_arch::hexagon::v64 | Q6_Vhf_vmax_VhfVhf | function
2755 | core::core_arch::hexagon::v64 | Q6_Vhf_vmin_VhfVhf | function
2756 | core::core_arch::hexagon::v64 | Q6_Vhf_vmpy_VhfVhf | function
2757 | core::core_arch::hexagon::v64 | Q6_Vhf_vmpyacc_VhfVhfVhf | function
2758 | core::core_arch::hexagon::v64 | Q6_Vhf_vsub_VhfVhf | function
2759 | core::core_arch::hexagon::v64 | Q6_Vqf16_vadd_VhfVhf | function
2760 | core::core_arch::hexagon::v64 | Q6_Vqf16_vadd_Vqf16Vhf | function
2761 | core::core_arch::hexagon::v64 | Q6_Vqf16_vadd_Vqf16Vqf16 | function
2762 | core::core_arch::hexagon::v64 | Q6_Vqf16_vmpy_VhfVhf | function
2763 | core::core_arch::hexagon::v64 | Q6_Vqf16_vmpy_Vqf16Vhf | function
2764 | core::core_arch::hexagon::v64 | Q6_Vqf16_vmpy_Vqf16Vqf16 | function
2765 | core::core_arch::hexagon::v64 | Q6_Vqf16_vsub_VhfVhf | function
2766 | core::core_arch::hexagon::v64 | Q6_Vqf16_vsub_Vqf16Vhf | function
2767 | core::core_arch::hexagon::v64 | Q6_Vqf16_vsub_Vqf16Vqf16 | function
2768 | core::core_arch::hexagon::v64 | Q6_Vqf32_vadd_Vqf32Vqf32 | function
2769 | core::core_arch::hexagon::v64 | Q6_Vqf32_vadd_Vqf32Vsf | function
2770 | core::core_arch::hexagon::v64 | Q6_Vqf32_vadd_VsfVsf | function
2771 | core::core_arch::hexagon::v64 | Q6_Vqf32_vmpy_Vqf32Vqf32 | function
2772 | core::core_arch::hexagon::v64 | Q6_Vqf32_vmpy_VsfVsf | function
2773 | core::core_arch::hexagon::v64 | Q6_Vqf32_vsub_Vqf32Vqf32 | function
2774 | core::core_arch::hexagon::v64 | Q6_Vqf32_vsub_Vqf32Vsf | function
2775 | core::core_arch::hexagon::v64 | Q6_Vqf32_vsub_VsfVsf | function
2776 | core::core_arch::hexagon::v64 | Q6_Vsf_equals_Vqf32 | function
2777 | core::core_arch::hexagon::v64 | Q6_Vsf_equals_Vw | function
2778 | core::core_arch::hexagon::v64 | Q6_Vsf_vabs_Vsf | function
2779 | core::core_arch::hexagon::v64 | Q6_Vsf_vadd_VsfVsf | function
2780 | core::core_arch::hexagon::v64 | Q6_Vsf_vdmpy_VhfVhf | function
2781 | core::core_arch::hexagon::v64 | Q6_Vsf_vdmpyacc_VsfVhfVhf | function
2782 | core::core_arch::hexagon::v64 | Q6_Vsf_vfmax_VsfVsf | function
2783 | core::core_arch::hexagon::v64 | Q6_Vsf_vfmin_VsfVsf | function
2784 | core::core_arch::hexagon::v64 | Q6_Vsf_vfneg_Vsf | function
2785 | core::core_arch::hexagon::v64 | Q6_Vsf_vmax_VsfVsf | function
2786 | core::core_arch::hexagon::v64 | Q6_Vsf_vmin_VsfVsf | function
2787 | core::core_arch::hexagon::v64 | Q6_Vsf_vmpy_VsfVsf | function
2788 | core::core_arch::hexagon::v64 | Q6_Vsf_vsub_VsfVsf | function
2789 | core::core_arch::hexagon::v64 | Q6_Vub_vabsdiff_VubVub | function
2790 | core::core_arch::hexagon::v64 | Q6_Vub_vadd_VubVb_sat | function
2791 | core::core_arch::hexagon::v64 | Q6_Vub_vadd_VubVub_sat | function
2792 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_VhVhR_rnd_sat | function
2793 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_VhVhR_sat | function
2794 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_VuhVuhR_rnd_sat | function
2795 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_VuhVuhR_sat | function
2796 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_WuhVub_rnd_sat | function
2797 | core::core_arch::hexagon::v64 | Q6_Vub_vasr_WuhVub_sat | function
2798 | core::core_arch::hexagon::v64 | Q6_Vub_vavg_VubVub | function
2799 | core::core_arch::hexagon::v64 | Q6_Vub_vavg_VubVub_rnd | function
2800 | core::core_arch::hexagon::v64 | Q6_Vub_vcvt_VhfVhf | function
2801 | core::core_arch::hexagon::v64 | Q6_Vub_vlsr_VubR | function
2802 | core::core_arch::hexagon::v64 | Q6_Vub_vmax_VubVub | function
2803 | core::core_arch::hexagon::v64 | Q6_Vub_vmin_VubVub | function
2804 | core::core_arch::hexagon::v64 | Q6_Vub_vpack_VhVh_sat | function
2805 | core::core_arch::hexagon::v64 | Q6_Vub_vround_VhVh_sat | function
2806 | core::core_arch::hexagon::v64 | Q6_Vub_vround_VuhVuh_sat | function
2807 | core::core_arch::hexagon::v64 | Q6_Vub_vsat_VhVh | function
2808 | core::core_arch::hexagon::v64 | Q6_Vub_vsub_VubVb_sat | function
2809 | core::core_arch::hexagon::v64 | Q6_Vub_vsub_VubVub_sat | function
2810 | core::core_arch::hexagon::v64 | Q6_Vuh_vabsdiff_VhVh | function
2811 | core::core_arch::hexagon::v64 | Q6_Vuh_vabsdiff_VuhVuh | function
2812 | core::core_arch::hexagon::v64 | Q6_Vuh_vadd_VuhVuh_sat | function
2813 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_VuwVuwR_rnd_sat | function
2814 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_VuwVuwR_sat | function
2815 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_VwVwR_rnd_sat | function
2816 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_VwVwR_sat | function
2817 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_WwVuh_rnd_sat | function
2818 | core::core_arch::hexagon::v64 | Q6_Vuh_vasr_WwVuh_sat | function
2819 | core::core_arch::hexagon::v64 | Q6_Vuh_vavg_VuhVuh | function
2820 | core::core_arch::hexagon::v64 | Q6_Vuh_vavg_VuhVuh_rnd | function
2821 | core::core_arch::hexagon::v64 | Q6_Vuh_vcl0_Vuh | function
2822 | core::core_arch::hexagon::v64 | Q6_Vuh_vcvt_Vhf | function
2823 | core::core_arch::hexagon::v64 | Q6_Vuh_vlsr_VuhR | function
2824 | core::core_arch::hexagon::v64 | Q6_Vuh_vmax_VuhVuh | function
2825 | core::core_arch::hexagon::v64 | Q6_Vuh_vmin_VuhVuh | function
2826 | core::core_arch::hexagon::v64 | Q6_Vuh_vmpy_VuhVuh_rs16 | function
2827 | core::core_arch::hexagon::v64 | Q6_Vuh_vpack_VwVw_sat | function
2828 | core::core_arch::hexagon::v64 | Q6_Vuh_vround_VuwVuw_sat | function
2829 | core::core_arch::hexagon::v64 | Q6_Vuh_vround_VwVw_sat | function
2830 | core::core_arch::hexagon::v64 | Q6_Vuh_vsat_VuwVuw | function
2831 | core::core_arch::hexagon::v64 | Q6_Vuh_vsub_VuhVuh_sat | function
2832 | core::core_arch::hexagon::v64 | Q6_Vuw_vabsdiff_VwVw | function
2833 | core::core_arch::hexagon::v64 | Q6_Vuw_vadd_VuwVuw_sat | function
2834 | core::core_arch::hexagon::v64 | Q6_Vuw_vavg_VuwVuw | function
2835 | core::core_arch::hexagon::v64 | Q6_Vuw_vavg_VuwVuw_rnd | function
2836 | core::core_arch::hexagon::v64 | Q6_Vuw_vcl0_Vuw | function
2837 | core::core_arch::hexagon::v64 | Q6_Vuw_vlsr_VuwR | function
2838 | core::core_arch::hexagon::v64 | Q6_Vuw_vmpye_VuhRuh | function
2839 | core::core_arch::hexagon::v64 | Q6_Vuw_vmpyeacc_VuwVuhRuh | function
2840 | core::core_arch::hexagon::v64 | Q6_Vuw_vrmpy_VubRub | function
2841 | core::core_arch::hexagon::v64 | Q6_Vuw_vrmpy_VubVub | function
2842 | core::core_arch::hexagon::v64 | Q6_Vuw_vrmpyacc_VuwVubRub | function
2843 | core::core_arch::hexagon::v64 | Q6_Vuw_vrmpyacc_VuwVubVub | function
2844 | core::core_arch::hexagon::v64 | Q6_Vuw_vrotr_VuwVuw | function
2845 | core::core_arch::hexagon::v64 | Q6_Vuw_vsub_VuwVuw_sat | function
2846 | core::core_arch::hexagon::v64 | Q6_Vw_condacc_QVwVw | function
2847 | core::core_arch::hexagon::v64 | Q6_Vw_condacc_QnVwVw | function
2848 | core::core_arch::hexagon::v64 | Q6_Vw_condnac_QVwVw | function
2849 | core::core_arch::hexagon::v64 | Q6_Vw_condnac_QnVwVw | function
2850 | core::core_arch::hexagon::v64 | Q6_Vw_equals_Vsf | function
2851 | core::core_arch::hexagon::v64 | Q6_Vw_prefixsum_Q | function
2852 | core::core_arch::hexagon::v64 | Q6_Vw_vabs_Vw | function
2853 | core::core_arch::hexagon::v64 | Q6_Vw_vabs_Vw_sat | function
2854 | core::core_arch::hexagon::v64 | Q6_Vw_vadd_VwVw | function
2855 | core::core_arch::hexagon::v64 | Q6_Vw_vadd_VwVwQ_carry_sat | function
2856 | core::core_arch::hexagon::v64 | Q6_Vw_vadd_VwVw_sat | function
2857 | core::core_arch::hexagon::v64 | Q6_Vw_vadd_vclb_VwVw | function
2858 | core::core_arch::hexagon::v64 | Q6_Vw_vasl_VwR | function
2859 | core::core_arch::hexagon::v64 | Q6_Vw_vasl_VwVw | function
2860 | core::core_arch::hexagon::v64 | Q6_Vw_vaslacc_VwVwR | function
2861 | core::core_arch::hexagon::v64 | Q6_Vw_vasr_VwR | function
2862 | core::core_arch::hexagon::v64 | Q6_Vw_vasr_VwVw | function
2863 | core::core_arch::hexagon::v64 | Q6_Vw_vasracc_VwVwR | function
2864 | core::core_arch::hexagon::v64 | Q6_Vw_vavg_VwVw | function
2865 | core::core_arch::hexagon::v64 | Q6_Vw_vavg_VwVw_rnd | function
2866 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_VhRb | function
2867 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_VhRh_sat | function
2868 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_VhRuh_sat | function
2869 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_VhVh_sat | function
2870 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_WhRh_sat | function
2871 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpy_WhRuh_sat | function
2872 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwVhRb | function
2873 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwVhRh_sat | function
2874 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwVhRuh_sat | function
2875 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwVhVh_sat | function
2876 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwWhRh_sat | function
2877 | core::core_arch::hexagon::v64 | Q6_Vw_vdmpyacc_VwWhRuh_sat | function
2878 | core::core_arch::hexagon::v64 | Q6_Vw_vfmv_Vw | function
2879 | core::core_arch::hexagon::v64 | Q6_Vw_vinsert_VwR | function
2880 | core::core_arch::hexagon::v64 | Q6_Vw_vlsr_VwVw | function
2881 | core::core_arch::hexagon::v64 | Q6_Vw_vmax_VwVw | function
2882 | core::core_arch::hexagon::v64 | Q6_Vw_vmin_VwVw | function
2883 | core::core_arch::hexagon::v64 | Q6_Vw_vmpye_VwVuh | function
2884 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyi_VwRb | function
2885 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyi_VwRh | function
2886 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyi_VwRub | function
2887 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyiacc_VwVwRb | function
2888 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyiacc_VwVwRh | function
2889 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyiacc_VwVwRub | function
2890 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyie_VwVuh | function
2891 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyieacc_VwVwVh | function
2892 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyieacc_VwVwVuh | function
2893 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyieo_VhVh | function
2894 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyio_VwVh | function
2895 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyo_VwVh_s1_rnd_sat | function
2896 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyo_VwVh_s1_sat | function
2897 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyoacc_VwVwVh_s1_rnd_sat_shift | function
2898 | core::core_arch::hexagon::v64 | Q6_Vw_vmpyoacc_VwVwVh_s1_sat_shift | function
2899 | core::core_arch::hexagon::v64 | Q6_Vw_vnavg_VwVw | function
2900 | core::core_arch::hexagon::v64 | Q6_Vw_vnormamt_Vw | function
2901 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpy_VbVb | function
2902 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpy_VubRb | function
2903 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpy_VubVb | function
2904 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpyacc_VwVbVb | function
2905 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpyacc_VwVubRb | function
2906 | core::core_arch::hexagon::v64 | Q6_Vw_vrmpyacc_VwVubVb | function
2907 | core::core_arch::hexagon::v64 | Q6_Vw_vsatdw_VwVw | function
2908 | core::core_arch::hexagon::v64 | Q6_Vw_vsub_VwVw | function
2909 | core::core_arch::hexagon::v64 | Q6_Vw_vsub_VwVw_sat | function
2910 | core::core_arch::hexagon::v64 | Q6_W_equals_W | function
2911 | core::core_arch::hexagon::v64 | Q6_W_vcombine_VV | function
2912 | core::core_arch::hexagon::v64 | Q6_W_vdeal_VVR | function
2913 | core::core_arch::hexagon::v64 | Q6_W_vmpye_VwVuh | function
2914 | core::core_arch::hexagon::v64 | Q6_W_vmpyoacc_WVwVh | function
2915 | core::core_arch::hexagon::v64 | Q6_W_vshuff_VVR | function
2916 | core::core_arch::hexagon::v64 | Q6_W_vswap_QVV | function
2917 | core::core_arch::hexagon::v64 | Q6_W_vzero | function
2918 | core::core_arch::hexagon::v64 | Q6_Wb_vadd_WbWb | function
2919 | core::core_arch::hexagon::v64 | Q6_Wb_vadd_WbWb_sat | function
2920 | core::core_arch::hexagon::v64 | Q6_Wb_vshuffoe_VbVb | function
2921 | core::core_arch::hexagon::v64 | Q6_Wb_vsub_WbWb | function
2922 | core::core_arch::hexagon::v64 | Q6_Wb_vsub_WbWb_sat | function
2923 | core::core_arch::hexagon::v64 | Q6_Wh_vadd_VubVub | function
2924 | core::core_arch::hexagon::v64 | Q6_Wh_vadd_WhWh | function
2925 | core::core_arch::hexagon::v64 | Q6_Wh_vadd_WhWh_sat | function
2926 | core::core_arch::hexagon::v64 | Q6_Wh_vaddacc_WhVubVub | function
2927 | core::core_arch::hexagon::v64 | Q6_Wh_vdmpy_WubRb | function
2928 | core::core_arch::hexagon::v64 | Q6_Wh_vdmpyacc_WhWubRb | function
2929 | core::core_arch::hexagon::v64 | Q6_Wh_vlut16_VbVhI | function
2930 | core::core_arch::hexagon::v64 | Q6_Wh_vlut16_VbVhR | function
2931 | core::core_arch::hexagon::v64 | Q6_Wh_vlut16_VbVhR_nomatch | function
2932 | core::core_arch::hexagon::v64 | Q6_Wh_vlut16or_WhVbVhI | function
2933 | core::core_arch::hexagon::v64 | Q6_Wh_vlut16or_WhVbVhR | function
2934 | core::core_arch::hexagon::v64 | Q6_Wh_vmpa_WubRb | function
2935 | core::core_arch::hexagon::v64 | Q6_Wh_vmpa_WubRub | function
2936 | core::core_arch::hexagon::v64 | Q6_Wh_vmpa_WubWb | function
2937 | core::core_arch::hexagon::v64 | Q6_Wh_vmpa_WubWub | function
2938 | core::core_arch::hexagon::v64 | Q6_Wh_vmpaacc_WhWubRb | function
2939 | core::core_arch::hexagon::v64 | Q6_Wh_vmpaacc_WhWubRub | function
2940 | core::core_arch::hexagon::v64 | Q6_Wh_vmpy_VbVb | function
2941 | core::core_arch::hexagon::v64 | Q6_Wh_vmpy_VubRb | function
2942 | core::core_arch::hexagon::v64 | Q6_Wh_vmpy_VubVb | function
2943 | core::core_arch::hexagon::v64 | Q6_Wh_vmpyacc_WhVbVb | function
2944 | core::core_arch::hexagon::v64 | Q6_Wh_vmpyacc_WhVubRb | function
2945 | core::core_arch::hexagon::v64 | Q6_Wh_vmpyacc_WhVubVb | function
2946 | core::core_arch::hexagon::v64 | Q6_Wh_vshuffoe_VhVh | function
2947 | core::core_arch::hexagon::v64 | Q6_Wh_vsub_VubVub | function
2948 | core::core_arch::hexagon::v64 | Q6_Wh_vsub_WhWh | function
2949 | core::core_arch::hexagon::v64 | Q6_Wh_vsub_WhWh_sat | function
2950 | core::core_arch::hexagon::v64 | Q6_Wh_vsxt_Vb | function
2951 | core::core_arch::hexagon::v64 | Q6_Wh_vtmpy_WbRb | function
2952 | core::core_arch::hexagon::v64 | Q6_Wh_vtmpy_WubRb | function
2953 | core::core_arch::hexagon::v64 | Q6_Wh_vtmpyacc_WhWbRb | function
2954 | core::core_arch::hexagon::v64 | Q6_Wh_vtmpyacc_WhWubRb | function
2955 | core::core_arch::hexagon::v64 | Q6_Wh_vunpack_Vb | function
2956 | core::core_arch::hexagon::v64 | Q6_Wh_vunpackoor_WhVb | function
2957 | core::core_arch::hexagon::v64 | Q6_Whf_vcvt2_Vb | function
2958 | core::core_arch::hexagon::v64 | Q6_Whf_vcvt2_Vub | function
2959 | core::core_arch::hexagon::v64 | Q6_Whf_vcvt_V | function
2960 | core::core_arch::hexagon::v64 | Q6_Whf_vcvt_Vb | function
2961 | core::core_arch::hexagon::v64 | Q6_Whf_vcvt_Vub | function
2962 | core::core_arch::hexagon::v64 | Q6_Wqf32_vmpy_VhfVhf | function
2963 | core::core_arch::hexagon::v64 | Q6_Wqf32_vmpy_Vqf16Vhf | function
2964 | core::core_arch::hexagon::v64 | Q6_Wqf32_vmpy_Vqf16Vqf16 | function
2965 | core::core_arch::hexagon::v64 | Q6_Wsf_vadd_VhfVhf | function
2966 | core::core_arch::hexagon::v64 | Q6_Wsf_vcvt_Vhf | function
2967 | core::core_arch::hexagon::v64 | Q6_Wsf_vmpy_VhfVhf | function
2968 | core::core_arch::hexagon::v64 | Q6_Wsf_vmpyacc_WsfVhfVhf | function
2969 | core::core_arch::hexagon::v64 | Q6_Wsf_vsub_VhfVhf | function
2970 | core::core_arch::hexagon::v64 | Q6_Wub_vadd_WubWub_sat | function
2971 | core::core_arch::hexagon::v64 | Q6_Wub_vsub_WubWub_sat | function
2972 | core::core_arch::hexagon::v64 | Q6_Wuh_vadd_WuhWuh_sat | function
2973 | core::core_arch::hexagon::v64 | Q6_Wuh_vmpy_VubRub | function
2974 | core::core_arch::hexagon::v64 | Q6_Wuh_vmpy_VubVub | function
2975 | core::core_arch::hexagon::v64 | Q6_Wuh_vmpyacc_WuhVubRub | function
2976 | core::core_arch::hexagon::v64 | Q6_Wuh_vmpyacc_WuhVubVub | function
2977 | core::core_arch::hexagon::v64 | Q6_Wuh_vsub_WuhWuh_sat | function
2978 | core::core_arch::hexagon::v64 | Q6_Wuh_vunpack_Vub | function
2979 | core::core_arch::hexagon::v64 | Q6_Wuh_vzxt_Vub | function
2980 | core::core_arch::hexagon::v64 | Q6_Wuw_vadd_WuwWuw_sat | function
2981 | core::core_arch::hexagon::v64 | Q6_Wuw_vdsad_WuhRuh | function
2982 | core::core_arch::hexagon::v64 | Q6_Wuw_vdsadacc_WuwWuhRuh | function
2983 | core::core_arch::hexagon::v64 | Q6_Wuw_vmpy_VuhRuh | function
2984 | core::core_arch::hexagon::v64 | Q6_Wuw_vmpy_VuhVuh | function
2985 | core::core_arch::hexagon::v64 | Q6_Wuw_vmpyacc_WuwVuhRuh | function
2986 | core::core_arch::hexagon::v64 | Q6_Wuw_vmpyacc_WuwVuhVuh | function
2987 | core::core_arch::hexagon::v64 | Q6_Wuw_vrmpy_WubRubI | function
2988 | core::core_arch::hexagon::v64 | Q6_Wuw_vrmpyacc_WuwWubRubI | function
2989 | core::core_arch::hexagon::v64 | Q6_Wuw_vrsad_WubRubI | function
2990 | core::core_arch::hexagon::v64 | Q6_Wuw_vrsadacc_WuwWubRubI | function
2991 | core::core_arch::hexagon::v64 | Q6_Wuw_vsub_WuwWuw_sat | function
2992 | core::core_arch::hexagon::v64 | Q6_Wuw_vunpack_Vuh | function
2993 | core::core_arch::hexagon::v64 | Q6_Wuw_vzxt_Vuh | function
2994 | core::core_arch::hexagon::v64 | Q6_Ww_v6mpy_WubWbI_h | function
2995 | core::core_arch::hexagon::v64 | Q6_Ww_v6mpy_WubWbI_v | function
2996 | core::core_arch::hexagon::v64 | Q6_Ww_v6mpyacc_WwWubWbI_h | function
2997 | core::core_arch::hexagon::v64 | Q6_Ww_v6mpyacc_WwWubWbI_v | function
2998 | core::core_arch::hexagon::v64 | Q6_Ww_vadd_VhVh | function
2999 | core::core_arch::hexagon::v64 | Q6_Ww_vadd_VuhVuh | function
3000 | core::core_arch::hexagon::v64 | Q6_Ww_vadd_WwWw | function
3001 | core::core_arch::hexagon::v64 | Q6_Ww_vadd_WwWw_sat | function
3002 | core::core_arch::hexagon::v64 | Q6_Ww_vaddacc_WwVhVh | function
3003 | core::core_arch::hexagon::v64 | Q6_Ww_vaddacc_WwVuhVuh | function
3004 | core::core_arch::hexagon::v64 | Q6_Ww_vasrinto_WwVwVw | function
3005 | core::core_arch::hexagon::v64 | Q6_Ww_vdmpy_WhRb | function
3006 | core::core_arch::hexagon::v64 | Q6_Ww_vdmpyacc_WwWhRb | function
3007 | core::core_arch::hexagon::v64 | Q6_Ww_vmpa_WhRb | function
3008 | core::core_arch::hexagon::v64 | Q6_Ww_vmpa_WuhRb | function
3009 | core::core_arch::hexagon::v64 | Q6_Ww_vmpaacc_WwWhRb | function
3010 | core::core_arch::hexagon::v64 | Q6_Ww_vmpaacc_WwWuhRb | function
3011 | core::core_arch::hexagon::v64 | Q6_Ww_vmpy_VhRh | function
3012 | core::core_arch::hexagon::v64 | Q6_Ww_vmpy_VhVh | function
3013 | core::core_arch::hexagon::v64 | Q6_Ww_vmpy_VhVuh | function
3014 | core::core_arch::hexagon::v64 | Q6_Ww_vmpyacc_WwVhRh | function
3015 | core::core_arch::hexagon::v64 | Q6_Ww_vmpyacc_WwVhRh_sat | function
3016 | core::core_arch::hexagon::v64 | Q6_Ww_vmpyacc_WwVhVh | function
3017 | core::core_arch::hexagon::v64 | Q6_Ww_vmpyacc_WwVhVuh | function
3018 | core::core_arch::hexagon::v64 | Q6_Ww_vrmpy_WubRbI | function
3019 | core::core_arch::hexagon::v64 | Q6_Ww_vrmpyacc_WwWubRbI | function
3020 | core::core_arch::hexagon::v64 | Q6_Ww_vsub_VhVh | function
3021 | core::core_arch::hexagon::v64 | Q6_Ww_vsub_VuhVuh | function
3022 | core::core_arch::hexagon::v64 | Q6_Ww_vsub_WwWw | function
3023 | core::core_arch::hexagon::v64 | Q6_Ww_vsub_WwWw_sat | function
3024 | core::core_arch::hexagon::v64 | Q6_Ww_vsxt_Vh | function
3025 | core::core_arch::hexagon::v64 | Q6_Ww_vtmpy_WhRb | function
3026 | core::core_arch::hexagon::v64 | Q6_Ww_vtmpyacc_WwWhRb | function
3027 | core::core_arch::hexagon::v64 | Q6_Ww_vunpack_Vh | function
3028 | core::core_arch::hexagon::v64 | Q6_Ww_vunpackoor_WwVh | function
3029 | core::core_arch::hexagon::v64 | Q6_vgather_AQRMVh | function
3030 | core::core_arch::hexagon::v64 | Q6_vgather_AQRMVw | function
3031 | core::core_arch::hexagon::v64 | Q6_vgather_AQRMWw | function
3032 | core::core_arch::hexagon::v64 | Q6_vgather_ARMVh | function
3033 | core::core_arch::hexagon::v64 | Q6_vgather_ARMVw | function
3034 | core::core_arch::hexagon::v64 | Q6_vgather_ARMWw | function
3035 | core::core_arch::hexagon::v64 | Q6_vmem_QRIV | function
3036 | core::core_arch::hexagon::v64 | Q6_vmem_QRIV_nt | function
3037 | core::core_arch::hexagon::v64 | Q6_vmem_QnRIV | function
3038 | core::core_arch::hexagon::v64 | Q6_vmem_QnRIV_nt | function
3039 | core::core_arch::hexagon::v64 | Q6_vscatter_QRMVhV | function
3040 | core::core_arch::hexagon::v64 | Q6_vscatter_QRMVwV | function
3041 | core::core_arch::hexagon::v64 | Q6_vscatter_QRMWwV | function
3042 | core::core_arch::hexagon::v64 | Q6_vscatter_RMVhV | function
3043 | core::core_arch::hexagon::v64 | Q6_vscatter_RMVwV | function
3044 | core::core_arch::hexagon::v64 | Q6_vscatter_RMWwV | function
3045 | core::core_arch::hexagon::v64 | Q6_vscatteracc_RMVhV | function
3046 | core::core_arch::hexagon::v64 | Q6_vscatteracc_RMVwV | function
3047 | core::core_arch::hexagon::v64 | Q6_vscatteracc_RMWwV | function
3048 | core::core_arch::loongarch32 | cacop | function
3049 | core::core_arch::loongarch32 | csrrd | function
3050 | core::core_arch::loongarch32 | csrwr | function
3051 | core::core_arch::loongarch32 | csrxchg | function
3052 | core::core_arch::loongarch64 | asrtgt | function
3053 | core::core_arch::loongarch64 | asrtle | function
3054 | core::core_arch::loongarch64 | cacop | function
3055 | core::core_arch::loongarch64 | csrrd | function
3056 | core::core_arch::loongarch64 | csrwr | function
3057 | core::core_arch::loongarch64 | csrxchg | function
3058 | core::core_arch::loongarch64 | iocsrrd_d | function
3059 | core::core_arch::loongarch64 | iocsrwr_d | function
3060 | core::core_arch::loongarch64 | lddir | function
3061 | core::core_arch::loongarch64 | ldpte | function
3062 | core::core_arch::loongarch64::lasx::generated | lasx_xvld | function
3063 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_b | function
3064 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_d | function
3065 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_h | function
3066 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_w | function
3067 | core::core_arch::loongarch64::lasx::generated | lasx_xvldx | function
3068 | core::core_arch::loongarch64::lasx::generated | lasx_xvst | function
3069 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_b | function
3070 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_d | function
3071 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_h | function
3072 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_w | function
3073 | core::core_arch::loongarch64::lasx::generated | lasx_xvstx | function
3074 | core::core_arch::loongarch64::lsx::generated | lsx_vld | function
3075 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_b | function
3076 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_d | function
3077 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_h | function
3078 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_w | function
3079 | core::core_arch::loongarch64::lsx::generated | lsx_vldx | function
3080 | core::core_arch::loongarch64::lsx::generated | lsx_vst | function
3081 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_b | function
3082 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_d | function
3083 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_h | function
3084 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_w | function
3085 | core::core_arch::loongarch64::lsx::generated | lsx_vstx | function
3086 | core::core_arch::loongarch_shared | brk | function
3087 | core::core_arch::loongarch_shared | iocsrrd_b | function
3088 | core::core_arch::loongarch_shared | iocsrrd_h | function
3089 | core::core_arch::loongarch_shared | iocsrrd_w | function
3090 | core::core_arch::loongarch_shared | iocsrwr_b | function
3091 | core::core_arch::loongarch_shared | iocsrwr_h | function
3092 | core::core_arch::loongarch_shared | iocsrwr_w | function
3093 | core::core_arch::loongarch_shared | movgr2fcsr | function
3094 | core::core_arch::loongarch_shared | syscall | function
3095 | core::core_arch::mips | break_ | function
3096 | core::core_arch::nvptx | __assert_fail | function
3097 | core::core_arch::nvptx | _block_dim_x | function
3098 | core::core_arch::nvptx | _block_dim_y | function
3099 | core::core_arch::nvptx | _block_dim_z | function
3100 | core::core_arch::nvptx | _block_idx_x | function
3101 | core::core_arch::nvptx | _block_idx_y | function
3102 | core::core_arch::nvptx | _block_idx_z | function
3103 | core::core_arch::nvptx | _grid_dim_x | function
3104 | core::core_arch::nvptx | _grid_dim_y | function
3105 | core::core_arch::nvptx | _grid_dim_z | function
3106 | core::core_arch::nvptx | _syncthreads | function
3107 | core::core_arch::nvptx | _thread_idx_x | function
3108 | core::core_arch::nvptx | _thread_idx_y | function
3109 | core::core_arch::nvptx | _thread_idx_z | function
3110 | core::core_arch::nvptx | free | function
3111 | core::core_arch::nvptx | malloc | function
3112 | core::core_arch::nvptx | trap | function
3113 | core::core_arch::nvptx | vprintf | function
3114 | core::core_arch::nvptx::packed | f16x2_add | function
3115 | core::core_arch::nvptx::packed | f16x2_fma | function
3116 | core::core_arch::nvptx::packed | f16x2_max | function
3117 | core::core_arch::nvptx::packed | f16x2_max_nan | function
3118 | core::core_arch::nvptx::packed | f16x2_min | function
3119 | core::core_arch::nvptx::packed | f16x2_min_nan | function
3120 | core::core_arch::nvptx::packed | f16x2_mul | function
3121 | core::core_arch::nvptx::packed | f16x2_neg | function
3122 | core::core_arch::nvptx::packed | f16x2_sub | function
3123 | core::core_arch::powerpc | trap | function
3124 | core::core_arch::powerpc64::vsx | vec_xl_len | function
3125 | core::core_arch::powerpc64::vsx | vec_xst_len | function
3126 | core::core_arch::powerpc::altivec | vec_abs | function
3127 | core::core_arch::powerpc::altivec | vec_abss | function
3128 | core::core_arch::powerpc::altivec | vec_add | function
3129 | core::core_arch::powerpc::altivec | vec_addc | function
3130 | core::core_arch::powerpc::altivec | vec_adde | function
3131 | core::core_arch::powerpc::altivec | vec_adds | function
3132 | core::core_arch::powerpc::altivec | vec_all_eq | function
3133 | core::core_arch::powerpc::altivec | vec_all_ge | function
3134 | core::core_arch::powerpc::altivec | vec_all_gt | function
3135 | core::core_arch::powerpc::altivec | vec_all_in | function
3136 | core::core_arch::powerpc::altivec | vec_all_le | function
3137 | core::core_arch::powerpc::altivec | vec_all_lt | function
3138 | core::core_arch::powerpc::altivec | vec_all_nan | function
3139 | core::core_arch::powerpc::altivec | vec_all_ne | function
3140 | core::core_arch::powerpc::altivec | vec_all_nge | function
3141 | core::core_arch::powerpc::altivec | vec_all_ngt | function
3142 | core::core_arch::powerpc::altivec | vec_all_nle | function
3143 | core::core_arch::powerpc::altivec | vec_all_nlt | function
3144 | core::core_arch::powerpc::altivec | vec_all_numeric | function
3145 | core::core_arch::powerpc::altivec | vec_and | function
3146 | core::core_arch::powerpc::altivec | vec_andc | function
3147 | core::core_arch::powerpc::altivec | vec_any_eq | function
3148 | core::core_arch::powerpc::altivec | vec_any_ge | function
3149 | core::core_arch::powerpc::altivec | vec_any_gt | function
3150 | core::core_arch::powerpc::altivec | vec_any_le | function
3151 | core::core_arch::powerpc::altivec | vec_any_lt | function
3152 | core::core_arch::powerpc::altivec | vec_any_nan | function
3153 | core::core_arch::powerpc::altivec | vec_any_ne | function
3154 | core::core_arch::powerpc::altivec | vec_any_nge | function
3155 | core::core_arch::powerpc::altivec | vec_any_ngt | function
3156 | core::core_arch::powerpc::altivec | vec_any_nle | function
3157 | core::core_arch::powerpc::altivec | vec_any_nlt | function
3158 | core::core_arch::powerpc::altivec | vec_any_numeric | function
3159 | core::core_arch::powerpc::altivec | vec_any_out | function
3160 | core::core_arch::powerpc::altivec | vec_avg | function
3161 | core::core_arch::powerpc::altivec | vec_ceil | function
3162 | core::core_arch::powerpc::altivec | vec_cmpb | function
3163 | core::core_arch::powerpc::altivec | vec_cmpeq | function
3164 | core::core_arch::powerpc::altivec | vec_cmpge | function
3165 | core::core_arch::powerpc::altivec | vec_cmpgt | function
3166 | core::core_arch::powerpc::altivec | vec_cmple | function
3167 | core::core_arch::powerpc::altivec | vec_cmplt | function
3168 | core::core_arch::powerpc::altivec | vec_cmpne | function
3169 | core::core_arch::powerpc::altivec | vec_cntlz | function
3170 | core::core_arch::powerpc::altivec | vec_ctf | function
3171 | core::core_arch::powerpc::altivec | vec_cts | function
3172 | core::core_arch::powerpc::altivec | vec_ctu | function
3173 | core::core_arch::powerpc::altivec | vec_expte | function
3174 | core::core_arch::powerpc::altivec | vec_extract | function
3175 | core::core_arch::powerpc::altivec | vec_floor | function
3176 | core::core_arch::powerpc::altivec | vec_insert | function
3177 | core::core_arch::powerpc::altivec | vec_ld | function
3178 | core::core_arch::powerpc::altivec | vec_lde | function
3179 | core::core_arch::powerpc::altivec | vec_ldl | function
3180 | core::core_arch::powerpc::altivec | vec_loge | function
3181 | core::core_arch::powerpc::altivec | vec_madd | function
3182 | core::core_arch::powerpc::altivec | vec_madds | function
3183 | core::core_arch::powerpc::altivec | vec_max | function
3184 | core::core_arch::powerpc::altivec | vec_mergeh | function
3185 | core::core_arch::powerpc::altivec | vec_mergel | function
3186 | core::core_arch::powerpc::altivec | vec_mfvscr | function
3187 | core::core_arch::powerpc::altivec | vec_min | function
3188 | core::core_arch::powerpc::altivec | vec_mladd | function
3189 | core::core_arch::powerpc::altivec | vec_mradds | function
3190 | core::core_arch::powerpc::altivec | vec_msum | function
3191 | core::core_arch::powerpc::altivec | vec_msums | function
3192 | core::core_arch::powerpc::altivec | vec_mul | function
3193 | core::core_arch::powerpc::altivec | vec_nand | function
3194 | core::core_arch::powerpc::altivec | vec_neg | function
3195 | core::core_arch::powerpc::altivec | vec_nmsub | function
3196 | core::core_arch::powerpc::altivec | vec_nor | function
3197 | core::core_arch::powerpc::altivec | vec_or | function
3198 | core::core_arch::powerpc::altivec | vec_orc | function
3199 | core::core_arch::powerpc::altivec | vec_pack | function
3200 | core::core_arch::powerpc::altivec | vec_packs | function
3201 | core::core_arch::powerpc::altivec | vec_packsu | function
3202 | core::core_arch::powerpc::altivec | vec_rl | function
3203 | core::core_arch::powerpc::altivec | vec_round | function
3204 | core::core_arch::powerpc::altivec | vec_sel | function
3205 | core::core_arch::powerpc::altivec | vec_sl | function
3206 | core::core_arch::powerpc::altivec | vec_sld | function
3207 | core::core_arch::powerpc::altivec | vec_sldw | function
3208 | core::core_arch::powerpc::altivec | vec_sll | function
3209 | core::core_arch::powerpc::altivec | vec_slo | function
3210 | core::core_arch::powerpc::altivec | vec_slv | function
3211 | core::core_arch::powerpc::altivec | vec_splat | function
3212 | core::core_arch::powerpc::altivec | vec_splat_s16 | function
3213 | core::core_arch::powerpc::altivec | vec_splat_s32 | function
3214 | core::core_arch::powerpc::altivec | vec_splat_s8 | function
3215 | core::core_arch::powerpc::altivec | vec_splat_u16 | function
3216 | core::core_arch::powerpc::altivec | vec_splat_u32 | function
3217 | core::core_arch::powerpc::altivec | vec_splat_u8 | function
3218 | core::core_arch::powerpc::altivec | vec_splats | function
3219 | core::core_arch::powerpc::altivec | vec_sr | function
3220 | core::core_arch::powerpc::altivec | vec_sra | function
3221 | core::core_arch::powerpc::altivec | vec_srl | function
3222 | core::core_arch::powerpc::altivec | vec_sro | function
3223 | core::core_arch::powerpc::altivec | vec_srv | function
3224 | core::core_arch::powerpc::altivec | vec_st | function
3225 | core::core_arch::powerpc::altivec | vec_ste | function
3226 | core::core_arch::powerpc::altivec | vec_stl | function
3227 | core::core_arch::powerpc::altivec | vec_sub | function
3228 | core::core_arch::powerpc::altivec | vec_subc | function
3229 | core::core_arch::powerpc::altivec | vec_subs | function
3230 | core::core_arch::powerpc::altivec | vec_sum4s | function
3231 | core::core_arch::powerpc::altivec | vec_unpackh | function
3232core::core_arch::powerpc::altivecvec_unpacklfunction
3233core::core_arch::powerpc::altivecvec_xlfunction
3234core::core_arch::powerpc::altivecvec_xorfunction
3235core::core_arch::powerpc::altivecvec_xstfunction
| 3236 | core::core_arch::powerpc::altivec::endian | vec_mule | function | |
| 3237 | core::core_arch::powerpc::altivec::endian | vec_mulo | function | |
| 3238 | core::core_arch::powerpc::altivec::endian | vec_perm | function | |
| 3239 | core::core_arch::powerpc::altivec::endian | vec_sum2s | function | |
| 3240 | core::core_arch::powerpc::vsx | vec_mergee | function | |
| 3241 | core::core_arch::powerpc::vsx | vec_mergeo | function | |
| 3242 | core::core_arch::powerpc::vsx | vec_xxpermdi | function | |
| 3243 | core::core_arch::riscv64 | hlv_d | function | |
| 3244 | core::core_arch::riscv64 | hlv_wu | function | |
| 3245 | core::core_arch::riscv64 | hsv_d | function | |
| 3246 | core::core_arch::riscv_shared | fence_i | function | |
| 3247 | core::core_arch::riscv_shared | hfence_gvma | function | |
| 3248 | core::core_arch::riscv_shared | hfence_gvma_all | function | |
| 3249 | core::core_arch::riscv_shared | hfence_gvma_gaddr | function | |
| 3250 | core::core_arch::riscv_shared | hfence_gvma_vmid | function | |
| 3251 | core::core_arch::riscv_shared | hfence_vvma | function | |
| 3252 | core::core_arch::riscv_shared | hfence_vvma_all | function | |
| 3253 | core::core_arch::riscv_shared | hfence_vvma_asid | function | |
| 3254 | core::core_arch::riscv_shared | hfence_vvma_vaddr | function | |
| 3255 | core::core_arch::riscv_shared | hinval_gvma | function | |
| 3256 | core::core_arch::riscv_shared | hinval_gvma_all | function | |
| 3257 | core::core_arch::riscv_shared | hinval_gvma_gaddr | function | |
| 3258 | core::core_arch::riscv_shared | hinval_gvma_vmid | function | |
| 3259 | core::core_arch::riscv_shared | hinval_vvma | function | |
| 3260 | core::core_arch::riscv_shared | hinval_vvma_all | function | |
| 3261 | core::core_arch::riscv_shared | hinval_vvma_asid | function | |
| 3262 | core::core_arch::riscv_shared | hinval_vvma_vaddr | function | |
| 3263 | core::core_arch::riscv_shared | hlv_b | function | |
| 3264 | core::core_arch::riscv_shared | hlv_bu | function | |
| 3265 | core::core_arch::riscv_shared | hlv_h | function | |
| 3266 | core::core_arch::riscv_shared | hlv_hu | function | |
| 3267 | core::core_arch::riscv_shared | hlv_w | function | |
| 3268 | core::core_arch::riscv_shared | hlvx_hu | function | |
| 3269 | core::core_arch::riscv_shared | hlvx_wu | function | |
| 3270 | core::core_arch::riscv_shared | hsv_b | function | |
| 3271 | core::core_arch::riscv_shared | hsv_h | function | |
| 3272 | core::core_arch::riscv_shared | hsv_w | function | |
| 3273 | core::core_arch::riscv_shared | sfence_inval_ir | function | |
| 3274 | core::core_arch::riscv_shared | sfence_vma | function | |
| 3275 | core::core_arch::riscv_shared | sfence_vma_all | function | |
| 3276 | core::core_arch::riscv_shared | sfence_vma_asid | function | |
| 3277 | core::core_arch::riscv_shared | sfence_vma_vaddr | function | |
| 3278 | core::core_arch::riscv_shared | sfence_w_inval | function | |
| 3279 | core::core_arch::riscv_shared | sinval_vma | function | |
| 3280 | core::core_arch::riscv_shared | sinval_vma_all | function | |
| 3281 | core::core_arch::riscv_shared | sinval_vma_asid | function | |
| 3282 | core::core_arch::riscv_shared | sinval_vma_vaddr | function | |
| 3283 | core::core_arch::riscv_shared | wfi | function | |
| 3284 | core::core_arch::s390x::vector | vec_abs | function | |
| 3285 | core::core_arch::s390x::vector | vec_add | function | |
| 3286 | core::core_arch::s390x::vector | vec_add_u128 | function | |
| 3287 | core::core_arch::s390x::vector | vec_addc_u128 | function | |
| 3288 | core::core_arch::s390x::vector | vec_adde_u128 | function | |
| 3289 | core::core_arch::s390x::vector | vec_addec_u128 | function | |
| 3290 | core::core_arch::s390x::vector | vec_all_eq | function | |
| 3291 | core::core_arch::s390x::vector | vec_all_ge | function | |
| 3292 | core::core_arch::s390x::vector | vec_all_gt | function | |
| 3293 | core::core_arch::s390x::vector | vec_all_le | function | |
| 3294 | core::core_arch::s390x::vector | vec_all_lt | function | |
| 3295 | core::core_arch::s390x::vector | vec_all_nan | function | |
| 3296 | core::core_arch::s390x::vector | vec_all_ne | function | |
| 3297 | core::core_arch::s390x::vector | vec_all_nge | function | |
| 3298 | core::core_arch::s390x::vector | vec_all_ngt | function | |
| 3299 | core::core_arch::s390x::vector | vec_all_nle | function | |
| 3300 | core::core_arch::s390x::vector | vec_all_nlt | function | |
| 3301 | core::core_arch::s390x::vector | vec_all_numeric | function | |
| 3302 | core::core_arch::s390x::vector | vec_and | function | |
| 3303 | core::core_arch::s390x::vector | vec_andc | function | |
| 3304 | core::core_arch::s390x::vector | vec_any_eq | function | |
| 3305 | core::core_arch::s390x::vector | vec_any_ge | function | |
| 3306 | core::core_arch::s390x::vector | vec_any_gt | function | |
| 3307 | core::core_arch::s390x::vector | vec_any_le | function | |
| 3308 | core::core_arch::s390x::vector | vec_any_lt | function | |
| 3309 | core::core_arch::s390x::vector | vec_any_nan | function | |
| 3310 | core::core_arch::s390x::vector | vec_any_ne | function | |
| 3311 | core::core_arch::s390x::vector | vec_any_nge | function | |
| 3312 | core::core_arch::s390x::vector | vec_any_ngt | function | |
| 3313 | core::core_arch::s390x::vector | vec_any_nle | function | |
| 3314 | core::core_arch::s390x::vector | vec_any_nlt | function | |
| 3315 | core::core_arch::s390x::vector | vec_any_numeric | function | |
| 3316 | core::core_arch::s390x::vector | vec_avg | function | |
| 3317 | core::core_arch::s390x::vector | vec_bperm_u128 | function | |
| 3318 | core::core_arch::s390x::vector | vec_ceil | function | |
| 3319 | core::core_arch::s390x::vector | vec_checksum | function | |
| 3320 | core::core_arch::s390x::vector | vec_cmpeq | function | |
| 3321 | core::core_arch::s390x::vector | vec_cmpeq_idx | function | |
| 3322 | core::core_arch::s390x::vector | vec_cmpeq_idx_cc | function | |
| 3323 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx | function | |
| 3324 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx_cc | function | |
| 3325 | core::core_arch::s390x::vector | vec_cmpge | function | |
| 3326 | core::core_arch::s390x::vector | vec_cmpgt | function | |
| 3327 | core::core_arch::s390x::vector | vec_cmple | function | |
| 3328 | core::core_arch::s390x::vector | vec_cmplt | function | |
| 3329 | core::core_arch::s390x::vector | vec_cmpne | function | |
| 3330 | core::core_arch::s390x::vector | vec_cmpne_idx | function | |
| 3331 | core::core_arch::s390x::vector | vec_cmpne_idx_cc | function | |
| 3332 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx | function | |
| 3333 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx_cc | function | |
| 3334 | core::core_arch::s390x::vector | vec_cmpnrg | function | |
| 3335 | core::core_arch::s390x::vector | vec_cmpnrg_cc | function | |
| 3336 | core::core_arch::s390x::vector | vec_cmpnrg_idx | function | |
| 3337 | core::core_arch::s390x::vector | vec_cmpnrg_idx_cc | function | |
| 3338 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx | function | |
| 3339 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx_cc | function | |
| 3340 | core::core_arch::s390x::vector | vec_cmprg | function | |
| 3341 | core::core_arch::s390x::vector | vec_cmprg_cc | function | |
| 3342 | core::core_arch::s390x::vector | vec_cmprg_idx | function | |
| 3343 | core::core_arch::s390x::vector | vec_cmprg_idx_cc | function | |
| 3344 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx | function | |
| 3345 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx_cc | function | |
| 3346 | core::core_arch::s390x::vector | vec_cntlz | function | |
| 3347 | core::core_arch::s390x::vector | vec_cnttz | function | |
| 3348 | core::core_arch::s390x::vector | vec_convert_from_fp16 | function | |
| 3349 | core::core_arch::s390x::vector | vec_convert_to_fp16 | function | |
| 3350 | core::core_arch::s390x::vector | vec_cp_until_zero | function | |
| 3351 | core::core_arch::s390x::vector | vec_cp_until_zero_cc | function | |
| 3352 | core::core_arch::s390x::vector | vec_double | function | |
| 3353 | core::core_arch::s390x::vector | vec_doublee | function | |
| 3354 | core::core_arch::s390x::vector | vec_eqv | function | |
| 3355 | core::core_arch::s390x::vector | vec_extend_s64 | function | |
| 3356 | core::core_arch::s390x::vector | vec_extend_to_fp32_hi | function | |
| 3357 | core::core_arch::s390x::vector | vec_extend_to_fp32_lo | function | |
| 3358 | core::core_arch::s390x::vector | vec_extract | function | |
| 3359 | core::core_arch::s390x::vector | vec_find_any_eq | function | |
| 3360 | core::core_arch::s390x::vector | vec_find_any_eq_cc | function | |
| 3361 | core::core_arch::s390x::vector | vec_find_any_eq_idx | function | |
| 3362 | core::core_arch::s390x::vector | vec_find_any_eq_idx_cc | function | |
| 3363 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx | function | |
| 3364 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx_cc | function | |
| 3365 | core::core_arch::s390x::vector | vec_find_any_ne | function | |
| 3366 | core::core_arch::s390x::vector | vec_find_any_ne_cc | function | |
| 3367 | core::core_arch::s390x::vector | vec_find_any_ne_idx | function | |
| 3368 | core::core_arch::s390x::vector | vec_find_any_ne_idx_cc | function | |
| 3369 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx | function | |
| 3370 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx_cc | function | |
| 3371 | core::core_arch::s390x::vector | vec_float | function | |
| 3372 | core::core_arch::s390x::vector | vec_floate | function | |
| 3373 | core::core_arch::s390x::vector | vec_floor | function | |
| 3374 | core::core_arch::s390x::vector | vec_fp_test_data_class | function | |
| 3375 | core::core_arch::s390x::vector | vec_gather_element | function | |
| 3376 | core::core_arch::s390x::vector | vec_genmask | function | |
| 3377 | core::core_arch::s390x::vector | vec_genmasks_16 | function | |
| 3378 | core::core_arch::s390x::vector | vec_genmasks_32 | function | |
| 3379 | core::core_arch::s390x::vector | vec_genmasks_64 | function | |
| 3380 | core::core_arch::s390x::vector | vec_genmasks_8 | function | |
| 3381 | core::core_arch::s390x::vector | vec_gfmsum | function | |
| 3382 | core::core_arch::s390x::vector | vec_gfmsum_128 | function | |
| 3383 | core::core_arch::s390x::vector | vec_gfmsum_accum | function | |
| 3384 | core::core_arch::s390x::vector | vec_gfmsum_accum_128 | function | |
| 3385 | core::core_arch::s390x::vector | vec_insert | function | |
| 3386 | core::core_arch::s390x::vector | vec_insert_and_zero | function | |
| 3387 | core::core_arch::s390x::vector | vec_load_bndry | function | |
| 3388 | core::core_arch::s390x::vector | vec_load_len | function | |
| 3389 | core::core_arch::s390x::vector | vec_load_len_r | function | |
| 3390 | core::core_arch::s390x::vector | vec_load_pair | function | |
| 3391 | core::core_arch::s390x::vector | vec_madd | function | |
| 3392 | core::core_arch::s390x::vector | vec_max | function | |
| 3393 | core::core_arch::s390x::vector | vec_meadd | function | |
| 3394 | core::core_arch::s390x::vector | vec_mergeh | function | |
| 3395 | core::core_arch::s390x::vector | vec_mergel | function | |
| 3396 | core::core_arch::s390x::vector | vec_mhadd | function | |
| 3397 | core::core_arch::s390x::vector | vec_min | function | |
| 3398 | core::core_arch::s390x::vector | vec_mladd | function | |
| 3399 | core::core_arch::s390x::vector | vec_moadd | function | |
| 3400 | core::core_arch::s390x::vector | vec_msub | function | |
| 3401 | core::core_arch::s390x::vector | vec_msum_u128 | function | |
| 3402 | core::core_arch::s390x::vector | vec_mul | function | |
| 3403 | core::core_arch::s390x::vector | vec_mule | function | |
| 3404 | core::core_arch::s390x::vector | vec_mulh | function | |
| 3405 | core::core_arch::s390x::vector | vec_mulo | function | |
| 3406 | core::core_arch::s390x::vector | vec_nabs | function | |
| 3407 | core::core_arch::s390x::vector | vec_nand | function | |
| 3408 | core::core_arch::s390x::vector | vec_neg | function | |
| 3409 | core::core_arch::s390x::vector | vec_nmadd | function | |
| 3410 | core::core_arch::s390x::vector | vec_nmsub | function | |
| 3411 | core::core_arch::s390x::vector | vec_nor | function | |
| 3412 | core::core_arch::s390x::vector | vec_or | function | |
| 3413 | core::core_arch::s390x::vector | vec_orc | function | |
| 3414 | core::core_arch::s390x::vector | vec_pack | function | |
| 3415 | core::core_arch::s390x::vector | vec_packs | function | |
| 3416 | core::core_arch::s390x::vector | vec_packs_cc | function | |
| 3417 | core::core_arch::s390x::vector | vec_packsu | function | |
| 3418 | core::core_arch::s390x::vector | vec_packsu_cc | function | |
| 3419 | core::core_arch::s390x::vector | vec_perm | function | |
| 3420 | core::core_arch::s390x::vector | vec_popcnt | function | |
| 3421 | core::core_arch::s390x::vector | vec_promote | function | |
| 3422 | core::core_arch::s390x::vector | vec_revb | function | |
| 3423 | core::core_arch::s390x::vector | vec_reve | function | |
| 3424 | core::core_arch::s390x::vector | vec_rint | function | |
| 3425 | core::core_arch::s390x::vector | vec_rl | function | |
| 3426 | core::core_arch::s390x::vector | vec_rli | function | |
| 3427 | core::core_arch::s390x::vector | vec_round | function | |
| 3428 | core::core_arch::s390x::vector | vec_round_from_fp32 | function | |
| 3429 | core::core_arch::s390x::vector | vec_roundc | function | |
| 3430 | core::core_arch::s390x::vector | vec_roundm | function | |
| 3431 | core::core_arch::s390x::vector | vec_roundp | function | |
| 3432 | core::core_arch::s390x::vector | vec_roundz | function | |
| 3433 | core::core_arch::s390x::vector | vec_search_string_cc | function | |
| 3434 | core::core_arch::s390x::vector | vec_search_string_until_zero_cc | function | |
| 3435 | core::core_arch::s390x::vector | vec_sel | function | |
| 3436 | core::core_arch::s390x::vector | vec_signed | function | |
| 3437 | core::core_arch::s390x::vector | vec_sl | function | |
| 3438 | core::core_arch::s390x::vector | vec_slb | function | |
| 3439 | core::core_arch::s390x::vector | vec_sld | function | |
| 3440 | core::core_arch::s390x::vector | vec_sldb | function | |
| 3441 | core::core_arch::s390x::vector | vec_sldw | function | |
| 3442 | core::core_arch::s390x::vector | vec_sll | function | |
| 3443 | core::core_arch::s390x::vector | vec_splat | function | |
| 3444 | core::core_arch::s390x::vector | vec_splat_s16 | function | |
| 3445 | core::core_arch::s390x::vector | vec_splat_s32 | function | |
| 3446 | core::core_arch::s390x::vector | vec_splat_s64 | function | |
| 3447 | core::core_arch::s390x::vector | vec_splat_s8 | function | |
| 3448 | core::core_arch::s390x::vector | vec_splat_u16 | function | |
| 3449 | core::core_arch::s390x::vector | vec_splat_u32 | function | |
| 3450 | core::core_arch::s390x::vector | vec_splat_u64 | function | |
| 3451 | core::core_arch::s390x::vector | vec_splat_u8 | function | |
| 3452 | core::core_arch::s390x::vector | vec_splats | function | |
| 3453 | core::core_arch::s390x::vector | vec_sqrt | function | |
| 3454 | core::core_arch::s390x::vector | vec_sr | function | |
| 3455 | core::core_arch::s390x::vector | vec_sra | function | |
| 3456 | core::core_arch::s390x::vector | vec_srab | function | |
| 3457 | core::core_arch::s390x::vector | vec_sral | function | |
| 3458 | core::core_arch::s390x::vector | vec_srb | function | |
| 3459 | core::core_arch::s390x::vector | vec_srdb | function | |
| 3460 | core::core_arch::s390x::vector | vec_srl | function | |
| 3461 | core::core_arch::s390x::vector | vec_store_len | function | |
| 3462 | core::core_arch::s390x::vector | vec_store_len_r | function | |
| 3463 | core::core_arch::s390x::vector | vec_sub | function | |
| 3464 | core::core_arch::s390x::vector | vec_sub_u128 | function | |
| 3465 | core::core_arch::s390x::vector | vec_subc | function | |
| 3466 | core::core_arch::s390x::vector | vec_subc_u128 | function | |
| 3467 | core::core_arch::s390x::vector | vec_sube_u128 | function | |
| 3468 | core::core_arch::s390x::vector | vec_subec_u128 | function | |
| 3469 | core::core_arch::s390x::vector | vec_sum2 | function | |
| 3470 | core::core_arch::s390x::vector | vec_sum4 | function | |
| 3471 | core::core_arch::s390x::vector | vec_sum_u128 | function | |
| 3472 | core::core_arch::s390x::vector | vec_test_mask | function | |
| 3473 | core::core_arch::s390x::vector | vec_trunc | function | |
| 3474 | core::core_arch::s390x::vector | vec_unpackh | function | |
| 3475 | core::core_arch::s390x::vector | vec_unpackl | function | |
| 3476 | core::core_arch::s390x::vector | vec_unsigned | function | |
| 3477 | core::core_arch::s390x::vector | vec_xl | function | |
| 3478 | core::core_arch::s390x::vector | vec_xor | function | |
| 3479 | core::core_arch::s390x::vector | vec_xst | function | |
| 3480 | core::core_arch::wasm32::atomic | memory_atomic_notify | function | |
| 3481 | core::core_arch::wasm32::atomic | memory_atomic_wait32 | function | |
| 3482 | core::core_arch::wasm32::atomic | memory_atomic_wait64 | function | |
| 3483 | core::core_arch::wasm32::simd128 | i16x8_load_extend_i8x8 | function | |
| 3484 | core::core_arch::wasm32::simd128 | i16x8_load_extend_u8x8 | function | |
| 3485 | core::core_arch::wasm32::simd128 | i32x4_load_extend_i16x4 | function | |
| 3486 | core::core_arch::wasm32::simd128 | i32x4_load_extend_u16x4 | function | |
| 3487 | core::core_arch::wasm32::simd128 | i64x2_load_extend_i32x2 | function | |
| 3488 | core::core_arch::wasm32::simd128 | i64x2_load_extend_u32x2 | function | |
| 3489 | core::core_arch::wasm32::simd128 | v128_load | function | |
| 3490 | core::core_arch::wasm32::simd128 | v128_load16_lane | function | |
| 3491 | core::core_arch::wasm32::simd128 | v128_load16_splat | function | |
| 3492 | core::core_arch::wasm32::simd128 | v128_load32_lane | function | |
| 3493 | core::core_arch::wasm32::simd128 | v128_load32_splat | function | |
| 3494 | core::core_arch::wasm32::simd128 | v128_load32_zero | function | |
| 3495 | core::core_arch::wasm32::simd128 | v128_load64_lane | function | |
| 3496 | core::core_arch::wasm32::simd128 | v128_load64_splat | function | |
| 3497 | core::core_arch::wasm32::simd128 | v128_load64_zero | function | |
| 3498 | core::core_arch::wasm32::simd128 | v128_load8_lane | function | |
| 3499 | core::core_arch::wasm32::simd128 | v128_load8_splat | function | |
| 3500 | core::core_arch::wasm32::simd128 | v128_store | function | |
| 3501 | core::core_arch::wasm32::simd128 | v128_store16_lane | function | |
| 3502 | core::core_arch::wasm32::simd128 | v128_store32_lane | function | |
| 3503 | core::core_arch::wasm32::simd128 | v128_store64_lane | function | |
| 3504 | core::core_arch::wasm32::simd128 | v128_store8_lane | function | |
| 3505 | core::core_arch::x86::avx | _mm256_lddqu_si256 | function | |
| 3506 | core::core_arch::x86::avx | _mm256_load_pd | function | |
| 3507 | core::core_arch::x86::avx | _mm256_load_ps | function | |
| 3508 | core::core_arch::x86::avx | _mm256_load_si256 | function | |
| 3509 | core::core_arch::x86::avx | _mm256_loadu2_m128 | function | |
| 3510 | core::core_arch::x86::avx | _mm256_loadu2_m128d | function | |
| 3511 | core::core_arch::x86::avx | _mm256_loadu2_m128i | function | |
| 3512 | core::core_arch::x86::avx | _mm256_loadu_pd | function | |
| 3513 | core::core_arch::x86::avx | _mm256_loadu_ps | function | |
| 3514 | core::core_arch::x86::avx | _mm256_loadu_si256 | function | |
| 3515 | core::core_arch::x86::avx | _mm256_maskload_pd | function | |
| 3516 | core::core_arch::x86::avx | _mm256_maskload_ps | function | |
| 3517 | core::core_arch::x86::avx | _mm256_maskstore_pd | function | |
| 3518 | core::core_arch::x86::avx | _mm256_maskstore_ps | function | |
| 3519 | core::core_arch::x86::avx | _mm256_store_pd | function | |
| 3520 | core::core_arch::x86::avx | _mm256_store_ps | function | |
| 3521 | core::core_arch::x86::avx | _mm256_store_si256 | function | |
| 3522 | core::core_arch::x86::avx | _mm256_storeu2_m128 | function | |
| 3523 | core::core_arch::x86::avx | _mm256_storeu2_m128d | function | |
| 3524 | core::core_arch::x86::avx | _mm256_storeu2_m128i | function | |
| 3525 | core::core_arch::x86::avx | _mm256_storeu_pd | function | |
| 3526 | core::core_arch::x86::avx | _mm256_storeu_ps | function | |
| 3527 | core::core_arch::x86::avx | _mm256_storeu_si256 | function | |
| 3528 | core::core_arch::x86::avx | _mm256_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 3529 | core::core_arch::x86::avx | _mm256_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 3530 | core::core_arch::x86::avx | _mm256_stream_si256 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 3531 | core::core_arch::x86::avx | _mm_maskload_pd | function | |
| 3532 | core::core_arch::x86::avx | _mm_maskload_ps | function | |
| 3533 | core::core_arch::x86::avx | _mm_maskstore_pd | function | |
| 3534 | core::core_arch::x86::avx | _mm_maskstore_ps | function | |
| 3535 | core::core_arch::x86::avx2 | _mm256_i32gather_epi32 | function | |
| 3536 | core::core_arch::x86::avx2 | _mm256_i32gather_epi64 | function | |
| 3537 | core::core_arch::x86::avx2 | _mm256_i32gather_pd | function | |
| 3538 | core::core_arch::x86::avx2 | _mm256_i32gather_ps | function | |
| 3539 | core::core_arch::x86::avx2 | _mm256_i64gather_epi32 | function | |
| 3540 | core::core_arch::x86::avx2 | _mm256_i64gather_epi64 | function | |
| 3541 | core::core_arch::x86::avx2 | _mm256_i64gather_pd | function | |
| 3542 | core::core_arch::x86::avx2 | _mm256_i64gather_ps | function | |
| 3543 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi32 | function | |
| 3544 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi64 | function | |
| 3545 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_pd | function | |
| 3546 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_ps | function | |
| 3547 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi32 | function | |
| 3548 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi64 | function | |
| 3549 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_pd | function | |
| 3550 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_ps | function | |
| 3551 | core::core_arch::x86::avx2 | _mm256_maskload_epi32 | function | |
| 3552 | core::core_arch::x86::avx2 | _mm256_maskload_epi64 | function | |
| 3553 | core::core_arch::x86::avx2 | _mm256_maskstore_epi32 | function | |
| 3554 | core::core_arch::x86::avx2 | _mm256_maskstore_epi64 | function | |
| 3555 | core::core_arch::x86::avx2 | _mm256_stream_load_si256 | function | |
| 3556 | core::core_arch::x86::avx2 | _mm_i32gather_epi32 | function | |
| 3557 | core::core_arch::x86::avx2 | _mm_i32gather_epi64 | function | |
| 3558 | core::core_arch::x86::avx2 | _mm_i32gather_pd | function | |
| 3559 | core::core_arch::x86::avx2 | _mm_i32gather_ps | function | |
| 3560 | core::core_arch::x86::avx2 | _mm_i64gather_epi32 | function | |
| 3561 | core::core_arch::x86::avx2 | _mm_i64gather_epi64 | function | |
| 3562 | core::core_arch::x86::avx2 | _mm_i64gather_pd | function | |
| 3563 | core::core_arch::x86::avx2 | _mm_i64gather_ps | function | |
| 3564 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi32 | function | |
| 3565 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi64 | function | |
| 3566 | core::core_arch::x86::avx2 | _mm_mask_i32gather_pd | function | |
| 3567 | core::core_arch::x86::avx2 | _mm_mask_i32gather_ps | function | |
| 3568 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi32 | function | |
| 3569 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi64 | function | |
| 3570 | core::core_arch::x86::avx2 | _mm_mask_i64gather_pd | function | |
| 3571 | core::core_arch::x86::avx2 | _mm_mask_i64gather_ps | function | |
| 3572 | core::core_arch::x86::avx2 | _mm_maskload_epi32 | function | |
| 3573 | core::core_arch::x86::avx2 | _mm_maskload_epi64 | function | |
| 3574 | core::core_arch::x86::avx2 | _mm_maskstore_epi32 | function | |
| 3575 | core::core_arch::x86::avx2 | _mm_maskstore_epi64 | function | |
| 3576 | core::core_arch::x86::avx512bw | kortest_mask32_u8 | function | |
| 3577 | core::core_arch::x86::avx512bw | kortest_mask64_u8 | function | |
| 3578 | core::core_arch::x86::avx512bw | ktest_mask32_u8 | function | |
| 3579 | core::core_arch::x86::avx512bw | ktest_mask64_u8 | function | |
| 3580 | core::core_arch::x86::avx512bw | load_mask32 | function | |
| 3581 | core::core_arch::x86::avx512bw | load_mask64 | function | |
| 3582 | core::core_arch::x86::avx512bw | _mm256_loadu_epi16 | function | |
| 3583 | core::core_arch::x86::avx512bw | _mm256_loadu_epi8 | function | |
| 3584 | core::core_arch::x86::avx512bw | _mm256_mask_cvtepi16_storeu_epi8 | function | |
| 3585 | core::core_arch::x86::avx512bw | _mm256_mask_cvtsepi16_storeu_epi8 | function | |
| 3586 | core::core_arch::x86::avx512bw | _mm256_mask_cvtusepi16_storeu_epi8 | function | |
| 3587 | core::core_arch::x86::avx512bw | _mm256_mask_loadu_epi16 | function | |
| 3588 | core::core_arch::x86::avx512bw | _mm256_mask_loadu_epi8 | function | |
| 3589 | core::core_arch::x86::avx512bw | _mm256_mask_storeu_epi16 | function | |
| 3590 | core::core_arch::x86::avx512bw | _mm256_mask_storeu_epi8 | function | |
| 3591 | core::core_arch::x86::avx512bw | _mm256_maskz_loadu_epi16 | function | |
| 3592 | core::core_arch::x86::avx512bw | _mm256_maskz_loadu_epi8 | function | |
| 3593 | core::core_arch::x86::avx512bw | _mm256_storeu_epi16 | function | |
| 3594 | core::core_arch::x86::avx512bw | _mm256_storeu_epi8 | function | |
| 3595 | core::core_arch::x86::avx512bw | _mm512_loadu_epi16 | function | |
| 3596 | core::core_arch::x86::avx512bw | _mm512_loadu_epi8 | function | |
| 3597 | core::core_arch::x86::avx512bw | _mm512_mask_cvtepi16_storeu_epi8 | function | |
| 3598 | core::core_arch::x86::avx512bw | _mm512_mask_cvtsepi16_storeu_epi8 | function | |
| 3599 | core::core_arch::x86::avx512bw | _mm512_mask_cvtusepi16_storeu_epi8 | function | |
| 3600 | core::core_arch::x86::avx512bw | _mm512_mask_loadu_epi16 | function | |
| 3601 | core::core_arch::x86::avx512bw | _mm512_mask_loadu_epi8 | function | |
| 3602 | core::core_arch::x86::avx512bw | _mm512_mask_storeu_epi16 | function | |
| 3603 | core::core_arch::x86::avx512bw | _mm512_mask_storeu_epi8 | function | |
| 3604 | core::core_arch::x86::avx512bw | _mm512_maskz_loadu_epi16 | function | |
| 3605 | core::core_arch::x86::avx512bw | _mm512_maskz_loadu_epi8 | function | |
| 3606 | core::core_arch::x86::avx512bw | _mm512_storeu_epi16 | function | |
| 3607 | core::core_arch::x86::avx512bw | _mm512_storeu_epi8 | function | |
| 3608 | core::core_arch::x86::avx512bw | _mm_loadu_epi16 | function | |
| 3609 | core::core_arch::x86::avx512bw | _mm_loadu_epi8 | function | |
| 3610 | core::core_arch::x86::avx512bw | _mm_mask_cvtepi16_storeu_epi8 | function | |
| 3611 | core::core_arch::x86::avx512bw | _mm_mask_cvtsepi16_storeu_epi8 | function | |
| 3612 | core::core_arch::x86::avx512bw | _mm_mask_cvtusepi16_storeu_epi8 | function | |
| 3613 | core::core_arch::x86::avx512bw | _mm_mask_loadu_epi16 | function | |
| 3614 | core::core_arch::x86::avx512bw | _mm_mask_loadu_epi8 | function | |
| 3615 | core::core_arch::x86::avx512bw | _mm_mask_storeu_epi16 | function | |
| 3616 | core::core_arch::x86::avx512bw | _mm_mask_storeu_epi8 | function | |
| 3617 | core::core_arch::x86::avx512bw | _mm_maskz_loadu_epi16 | function | |
| 3618 | core::core_arch::x86::avx512bw | _mm_maskz_loadu_epi8 | function | |
| 3619 | core::core_arch::x86::avx512bw | _mm_storeu_epi16 | function | |
| 3620 | core::core_arch::x86::avx512bw | _mm_storeu_epi8 | function | |
| 3621 | core::core_arch::x86::avx512bw | store_mask32 | function | |
| 3622 | core::core_arch::x86::avx512bw | store_mask64 | function | |
| 3623 | core::core_arch::x86::avx512dq | kortest_mask8_u8 | function | |
| 3624 | core::core_arch::x86::avx512dq | ktest_mask16_u8 | function | |
| 3625 | core::core_arch::x86::avx512dq | ktest_mask8_u8 | function | |
| 3626 | core::core_arch::x86::avx512dq | load_mask8 | function | |
| 3627 | core::core_arch::x86::avx512dq | store_mask8 | function | |
| 3628 | core::core_arch::x86::avx512f | kortest_mask16_u8 | function | |
| 3629 | core::core_arch::x86::avx512f | load_mask16 | function | |
| 3630 | core::core_arch::x86::avx512f | _mm256_i32scatter_epi32 | function | |
| 3631 | core::core_arch::x86::avx512f | _mm256_i32scatter_epi64 | function | |
| 3632 | core::core_arch::x86::avx512f | _mm256_i32scatter_pd | function | |
| 3633 | core::core_arch::x86::avx512f | _mm256_i32scatter_ps | function | |
| 3634 | core::core_arch::x86::avx512f | _mm256_i64scatter_epi32 | function | |
| 3635 | core::core_arch::x86::avx512f | _mm256_i64scatter_epi64 | function | |
| 3636 | core::core_arch::x86::avx512f | _mm256_i64scatter_pd | function | |
| 3637 | core::core_arch::x86::avx512f | _mm256_i64scatter_ps | function | |
| 3638 | core::core_arch::x86::avx512f | _mm256_load_epi32 | function | |
| 3639 | core::core_arch::x86::avx512f | _mm256_load_epi64 | function | |
| 3640 | core::core_arch::x86::avx512f | _mm256_loadu_epi32 | function | |
| 3641 | core::core_arch::x86::avx512f | _mm256_loadu_epi64 | function | |
| 3642 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_epi32 | function | |
| 3643 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_epi64 | function | |
| 3644 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_pd | function | |
| 3645 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_ps | function | |
| 3646 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi32_storeu_epi16 | function | |
| 3647 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi32_storeu_epi8 | function | |
| 3648 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi16 | function | |
| 3649 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi32 | function | |
| 3650 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi8 | function | |
| 3651 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi32_storeu_epi16 | function | |
| 3652 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi32_storeu_epi8 | function | |
| 3653 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi16 | function | |
| 3654 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi32 | function | |
| 3655 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi8 | function | |
| 3656 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi32_storeu_epi16 | function | |
| 3657 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi32_storeu_epi8 | function | |
| 3658 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi16 | function | |
| 3659 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi32 | function | |
| 3660 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi8 | function | |
| 3661 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_epi32 | function | |
| 3662 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_epi64 | function | |
| 3663 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_pd | function | |
| 3664 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_ps | function | |
| 3665 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_epi32 | function | |
| 3666 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_epi64 | function | |
| 3667 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_pd | function | |
| 3668 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_ps | function | |
| 3669 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_epi32 | function | |
| 3670 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_epi64 | function | |
| 3671 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_pd | function | |
| 3672 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_ps | function | |
| 3673 | core::core_arch::x86::avx512f | _mm256_mask_load_epi32 | function | |
| 3674 | core::core_arch::x86::avx512f | _mm256_mask_load_epi64 | function | |
| 3675 | core::core_arch::x86::avx512f | _mm256_mask_load_pd | function | |
| 3676 | core::core_arch::x86::avx512f | _mm256_mask_load_ps | function | |
| 3677 | core::core_arch::x86::avx512f | _mm256_mask_loadu_epi32 | function | |
| 3678 | core::core_arch::x86::avx512f | _mm256_mask_loadu_epi64 | function | |
| 3679 | core::core_arch::x86::avx512f | _mm256_mask_loadu_pd | function | |
| 3680 | core::core_arch::x86::avx512f | _mm256_mask_loadu_ps | function | |
| 3681 | core::core_arch::x86::avx512f | _mm256_mask_store_epi32 | function | |
| 3682 | core::core_arch::x86::avx512f | _mm256_mask_store_epi64 | function | |
| 3683 | core::core_arch::x86::avx512f | _mm256_mask_store_pd | function | |
| 3684 | core::core_arch::x86::avx512f | _mm256_mask_store_ps | function | |
| 3685 | core::core_arch::x86::avx512f | _mm256_mask_storeu_epi32 | function | |
| 3686 | core::core_arch::x86::avx512f | _mm256_mask_storeu_epi64 | function | |
| 3687 | core::core_arch::x86::avx512f | _mm256_mask_storeu_pd | function | |
| 3688 | core::core_arch::x86::avx512f | _mm256_mask_storeu_ps | function | |
| 3689 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_epi32 | function | |
| 3690 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_epi64 | function | |
| 3691 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_pd | function | |
| 3692 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_ps | function | |
| 3693 | core::core_arch::x86::avx512f | _mm256_maskz_load_epi32 | function | |
| 3694 | core::core_arch::x86::avx512f | _mm256_maskz_load_epi64 | function | |
| 3695 | core::core_arch::x86::avx512f | _mm256_maskz_load_pd | function | |
| 3696 | core::core_arch::x86::avx512f | _mm256_maskz_load_ps | function | |
| 3697 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_epi32 | function | |
| 3698 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_epi64 | function | |
| 3699 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_pd | function | |
| 3700 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_ps | function | |
| 3701 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_epi32 | function | |
| 3702 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_epi64 | function | |
| 3703 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_pd | function | |
| 3704 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_ps | function | |
| 3705 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_epi32 | function | |
| 3706 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_epi64 | function | |
| 3707 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_pd | function | |
| 3708 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_ps | function | |
| 3709 | core::core_arch::x86::avx512f | _mm256_store_epi32 | function | |
| 3710 | core::core_arch::x86::avx512f | _mm256_store_epi64 | function | |
| 3711 | core::core_arch::x86::avx512f | _mm256_storeu_epi32 | function | |
| 3712 | core::core_arch::x86::avx512f | _mm256_storeu_epi64 | function | |
| 3713 | core::core_arch::x86::avx512f | _mm512_i32gather_epi32 | function | |
| 3714 | core::core_arch::x86::avx512f | _mm512_i32gather_epi64 | function | |
| 3715 | core::core_arch::x86::avx512f | _mm512_i32gather_pd | function | |
| 3716 | core::core_arch::x86::avx512f | _mm512_i32gather_ps | function | |
| 3717 | core::core_arch::x86::avx512f | _mm512_i32logather_epi64 | function | |
| 3718 | core::core_arch::x86::avx512f | _mm512_i32logather_pd | function | |
| 3719 | core::core_arch::x86::avx512f | _mm512_i32loscatter_epi64 | function | |
| 3720 | core::core_arch::x86::avx512f | _mm512_i32loscatter_pd | function | |
| 3721 | core::core_arch::x86::avx512f | _mm512_i32scatter_epi32 | function | |
| 3722 | core::core_arch::x86::avx512f | _mm512_i32scatter_epi64 | function | |
| 3723 | core::core_arch::x86::avx512f | _mm512_i32scatter_pd | function | |
| 3724 | core::core_arch::x86::avx512f | _mm512_i32scatter_ps | function | |
| 3725 | core::core_arch::x86::avx512f | _mm512_i64gather_epi32 | function | |
| 3726 | core::core_arch::x86::avx512f | _mm512_i64gather_epi64 | function | |
| 3727 | core::core_arch::x86::avx512f | _mm512_i64gather_pd | function | |
| 3728 | core::core_arch::x86::avx512f | _mm512_i64gather_ps | function | |
| 3729 | core::core_arch::x86::avx512f | _mm512_i64scatter_epi32 | function | |
| 3730 | core::core_arch::x86::avx512f | _mm512_i64scatter_epi64 | function | |
| 3731 | core::core_arch::x86::avx512f | _mm512_i64scatter_pd | function | |
| 3732 | core::core_arch::x86::avx512f | _mm512_i64scatter_ps | function | |
| 3733 | core::core_arch::x86::avx512f | _mm512_load_epi32 | function | |
| 3734 | core::core_arch::x86::avx512f | _mm512_load_epi64 | function | |
| 3735 | core::core_arch::x86::avx512f | _mm512_load_pd | function | |
| 3736 | core::core_arch::x86::avx512f | _mm512_load_ps | function | |
| 3737 | core::core_arch::x86::avx512f | _mm512_load_si512 | function | |
| 3738 | core::core_arch::x86::avx512f | _mm512_loadu_epi32 | function | |
| 3739 | core::core_arch::x86::avx512f | _mm512_loadu_epi64 | function | |
| 3740 | core::core_arch::x86::avx512f | _mm512_loadu_pd | function | |
| 3741 | core::core_arch::x86::avx512f | _mm512_loadu_ps | function | |
| 3742 | core::core_arch::x86::avx512f | _mm512_loadu_si512 | function | |
| 3743 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_epi32 | function | |
| 3744 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_epi64 | function | |
| 3745 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_pd | function | |
| 3746 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_ps | function | |
| 3747 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi32_storeu_epi16 | function | |
| 3748 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi32_storeu_epi8 | function | |
| 3749 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi16 | function | |
| 3750 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi32 | function | |
| 3751 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi8 | function | |
| 3752 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi32_storeu_epi16 | function | |
| 3753 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi32_storeu_epi8 | function | |
| 3754 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi64_storeu_epi16 | function | |
| 3755 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi64_storeu_epi32 | function | |
3756core::core_arch::x86::avx512f_mm512_mask_cvtsepi64_storeu_epi8function
3757core::core_arch::x86::avx512f_mm512_mask_cvtusepi32_storeu_epi16function
3758core::core_arch::x86::avx512f_mm512_mask_cvtusepi32_storeu_epi8function
3759core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi16function
3760core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi32function
3761core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi8function
3762core::core_arch::x86::avx512f_mm512_mask_expandloadu_epi32function
3763core::core_arch::x86::avx512f_mm512_mask_expandloadu_epi64function
3764core::core_arch::x86::avx512f_mm512_mask_expandloadu_pdfunction
3765core::core_arch::x86::avx512f_mm512_mask_expandloadu_psfunction
3766core::core_arch::x86::avx512f_mm512_mask_i32gather_epi32function
3767core::core_arch::x86::avx512f_mm512_mask_i32gather_epi64function
3768core::core_arch::x86::avx512f_mm512_mask_i32gather_pdfunction
3769core::core_arch::x86::avx512f_mm512_mask_i32gather_psfunction
3770core::core_arch::x86::avx512f_mm512_mask_i32logather_epi64function
3771core::core_arch::x86::avx512f_mm512_mask_i32logather_pdfunction
3772core::core_arch::x86::avx512f_mm512_mask_i32loscatter_epi64function
3773core::core_arch::x86::avx512f_mm512_mask_i32loscatter_pdfunction
3774core::core_arch::x86::avx512f_mm512_mask_i32scatter_epi32function
3775core::core_arch::x86::avx512f_mm512_mask_i32scatter_epi64function
3776core::core_arch::x86::avx512f_mm512_mask_i32scatter_pdfunction
3777core::core_arch::x86::avx512f_mm512_mask_i32scatter_psfunction
3778core::core_arch::x86::avx512f_mm512_mask_i64gather_epi32function
3779core::core_arch::x86::avx512f_mm512_mask_i64gather_epi64function
3780core::core_arch::x86::avx512f_mm512_mask_i64gather_pdfunction
3781core::core_arch::x86::avx512f_mm512_mask_i64gather_psfunction
3782core::core_arch::x86::avx512f_mm512_mask_i64scatter_epi32function
3783core::core_arch::x86::avx512f_mm512_mask_i64scatter_epi64function
3784core::core_arch::x86::avx512f_mm512_mask_i64scatter_pdfunction
3785core::core_arch::x86::avx512f_mm512_mask_i64scatter_psfunction
3786core::core_arch::x86::avx512f_mm512_mask_load_epi32function
3787core::core_arch::x86::avx512f_mm512_mask_load_epi64function
3788core::core_arch::x86::avx512f_mm512_mask_load_pdfunction
3789core::core_arch::x86::avx512f_mm512_mask_load_psfunction
3790core::core_arch::x86::avx512f_mm512_mask_loadu_epi32function
3791core::core_arch::x86::avx512f_mm512_mask_loadu_epi64function
3792core::core_arch::x86::avx512f_mm512_mask_loadu_pdfunction
3793core::core_arch::x86::avx512f_mm512_mask_loadu_psfunction
3794core::core_arch::x86::avx512f_mm512_mask_store_epi32function
3795core::core_arch::x86::avx512f_mm512_mask_store_epi64function
3796core::core_arch::x86::avx512f_mm512_mask_store_pdfunction
3797core::core_arch::x86::avx512f_mm512_mask_store_psfunction
3798core::core_arch::x86::avx512f_mm512_mask_storeu_epi32function
3799core::core_arch::x86::avx512f_mm512_mask_storeu_epi64function
3800core::core_arch::x86::avx512f_mm512_mask_storeu_pdfunction
3801core::core_arch::x86::avx512f_mm512_mask_storeu_psfunction
3802core::core_arch::x86::avx512f_mm512_maskz_expandloadu_epi32function
3803core::core_arch::x86::avx512f_mm512_maskz_expandloadu_epi64function
3804core::core_arch::x86::avx512f_mm512_maskz_expandloadu_pdfunction
3805core::core_arch::x86::avx512f_mm512_maskz_expandloadu_psfunction
3806core::core_arch::x86::avx512f_mm512_maskz_load_epi32function
3807core::core_arch::x86::avx512f_mm512_maskz_load_epi64function
3808core::core_arch::x86::avx512f_mm512_maskz_load_pdfunction
3809core::core_arch::x86::avx512f_mm512_maskz_load_psfunction
3810core::core_arch::x86::avx512f_mm512_maskz_loadu_epi32function
3811core::core_arch::x86::avx512f_mm512_maskz_loadu_epi64function
3812core::core_arch::x86::avx512f_mm512_maskz_loadu_pdfunction
3813core::core_arch::x86::avx512f_mm512_maskz_loadu_psfunction
3814core::core_arch::x86::avx512f_mm512_store_epi32function
3815core::core_arch::x86::avx512f_mm512_store_epi64function
3816core::core_arch::x86::avx512f_mm512_store_pdfunction
3817core::core_arch::x86::avx512f_mm512_store_psfunction
3818core::core_arch::x86::avx512f_mm512_store_si512function
3819core::core_arch::x86::avx512f_mm512_storeu_epi32function
3820core::core_arch::x86::avx512f_mm512_storeu_epi64function
3821core::core_arch::x86::avx512f_mm512_storeu_pdfunction
3822core::core_arch::x86::avx512f_mm512_storeu_psfunction
3823core::core_arch::x86::avx512f_mm512_storeu_si512function
3824core::core_arch::x86::avx512f_mm512_stream_load_si512function
3825core::core_arch::x86::avx512f_mm512_stream_pdfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
3826core::core_arch::x86::avx512f_mm512_stream_psfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
3827core::core_arch::x86::avx512f_mm512_stream_si512functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
| 3828 | core::core_arch::x86::avx512f | _mm_i32scatter_epi32 | function | |
| 3829 | core::core_arch::x86::avx512f | _mm_i32scatter_epi64 | function | |
| 3830 | core::core_arch::x86::avx512f | _mm_i32scatter_pd | function | |
| 3831 | core::core_arch::x86::avx512f | _mm_i32scatter_ps | function | |
| 3832 | core::core_arch::x86::avx512f | _mm_i64scatter_epi32 | function | |
| 3833 | core::core_arch::x86::avx512f | _mm_i64scatter_epi64 | function | |
| 3834 | core::core_arch::x86::avx512f | _mm_i64scatter_pd | function | |
| 3835 | core::core_arch::x86::avx512f | _mm_i64scatter_ps | function | |
| 3836 | core::core_arch::x86::avx512f | _mm_load_epi32 | function | |
| 3837 | core::core_arch::x86::avx512f | _mm_load_epi64 | function | |
| 3838 | core::core_arch::x86::avx512f | _mm_loadu_epi32 | function | |
| 3839 | core::core_arch::x86::avx512f | _mm_loadu_epi64 | function | |
| 3840 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_epi32 | function | |
| 3841 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_epi64 | function | |
| 3842 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_pd | function | |
| 3843 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_ps | function | |
| 3844 | core::core_arch::x86::avx512f | _mm_mask_cvtepi32_storeu_epi16 | function | |
| 3845 | core::core_arch::x86::avx512f | _mm_mask_cvtepi32_storeu_epi8 | function | |
| 3846 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi16 | function | |
| 3847 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi32 | function | |
| 3848 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi8 | function | |
| 3849 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi32_storeu_epi16 | function | |
| 3850 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi32_storeu_epi8 | function | |
| 3851 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi16 | function | |
| 3852 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi32 | function | |
| 3853 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi8 | function | |
| 3854 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi32_storeu_epi16 | function | |
| 3855 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi32_storeu_epi8 | function | |
| 3856 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi16 | function | |
| 3857 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi32 | function | |
| 3858 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi8 | function | |
| 3859 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_epi32 | function | |
| 3860 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_epi64 | function | |
| 3861 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_pd | function | |
| 3862 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_ps | function | |
| 3863 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_epi32 | function | |
| 3864 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_epi64 | function | |
| 3865 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_pd | function | |
| 3866 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_ps | function | |
| 3867 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_epi32 | function | |
| 3868 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_epi64 | function | |
| 3869 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_pd | function | |
| 3870 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_ps | function | |
| 3871 | core::core_arch::x86::avx512f | _mm_mask_load_epi32 | function | |
| 3872 | core::core_arch::x86::avx512f | _mm_mask_load_epi64 | function | |
| 3873 | core::core_arch::x86::avx512f | _mm_mask_load_pd | function | |
| 3874 | core::core_arch::x86::avx512f | _mm_mask_load_ps | function | |
| 3875 | core::core_arch::x86::avx512f | _mm_mask_load_sd | function | |
| 3876 | core::core_arch::x86::avx512f | _mm_mask_load_ss | function | |
| 3877 | core::core_arch::x86::avx512f | _mm_mask_loadu_epi32 | function | |
| 3878 | core::core_arch::x86::avx512f | _mm_mask_loadu_epi64 | function | |
| 3879 | core::core_arch::x86::avx512f | _mm_mask_loadu_pd | function | |
| 3880 | core::core_arch::x86::avx512f | _mm_mask_loadu_ps | function | |
| 3881 | core::core_arch::x86::avx512f | _mm_mask_store_epi32 | function | |
| 3882 | core::core_arch::x86::avx512f | _mm_mask_store_epi64 | function | |
| 3883 | core::core_arch::x86::avx512f | _mm_mask_store_pd | function | |
| 3884 | core::core_arch::x86::avx512f | _mm_mask_store_ps | function | |
| 3885 | core::core_arch::x86::avx512f | _mm_mask_store_sd | function | |
| 3886 | core::core_arch::x86::avx512f | _mm_mask_store_ss | function | |
| 3887 | core::core_arch::x86::avx512f | _mm_mask_storeu_epi32 | function | |
| 3888 | core::core_arch::x86::avx512f | _mm_mask_storeu_epi64 | function | |
| 3889 | core::core_arch::x86::avx512f | _mm_mask_storeu_pd | function | |
| 3890 | core::core_arch::x86::avx512f | _mm_mask_storeu_ps | function | |
| 3891 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_epi32 | function | |
| 3892 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_epi64 | function | |
| 3893 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_pd | function | |
| 3894 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_ps | function | |
| 3895 | core::core_arch::x86::avx512f | _mm_maskz_load_epi32 | function | |
| 3896 | core::core_arch::x86::avx512f | _mm_maskz_load_epi64 | function | |
| 3897 | core::core_arch::x86::avx512f | _mm_maskz_load_pd | function | |
| 3898 | core::core_arch::x86::avx512f | _mm_maskz_load_ps | function | |
| 3899 | core::core_arch::x86::avx512f | _mm_maskz_load_sd | function | |
| 3900 | core::core_arch::x86::avx512f | _mm_maskz_load_ss | function | |
| 3901 | core::core_arch::x86::avx512f | _mm_maskz_loadu_epi32 | function | |
| 3902 | core::core_arch::x86::avx512f | _mm_maskz_loadu_epi64 | function | |
| 3903 | core::core_arch::x86::avx512f | _mm_maskz_loadu_pd | function | |
| 3904 | core::core_arch::x86::avx512f | _mm_maskz_loadu_ps | function | |
| 3905 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_epi32 | function | |
| 3906 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_epi64 | function | |
| 3907 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_pd | function | |
| 3908 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_ps | function | |
| 3909 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_epi32 | function | |
| 3910 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_epi64 | function | |
| 3911 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_pd | function | |
| 3912 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_ps | function | |
| 3913 | core::core_arch::x86::avx512f | _mm_store_epi32 | function | |
| 3914 | core::core_arch::x86::avx512f | _mm_store_epi64 | function | |
| 3915 | core::core_arch::x86::avx512f | _mm_storeu_epi32 | function | |
| 3916 | core::core_arch::x86::avx512f | _mm_storeu_epi64 | function | |
| 3917 | core::core_arch::x86::avx512f | _store_mask16 | function | |
| 3918 | core::core_arch::x86::avx512fp16 | _mm256_load_ph | function | |
| 3919 | core::core_arch::x86::avx512fp16 | _mm256_loadu_ph | function | |
| 3920 | core::core_arch::x86::avx512fp16 | _mm256_store_ph | function | |
| 3921 | core::core_arch::x86::avx512fp16 | _mm256_storeu_ph | function | |
| 3922 | core::core_arch::x86::avx512fp16 | _mm512_load_ph | function | |
| 3923 | core::core_arch::x86::avx512fp16 | _mm512_loadu_ph | function | |
| 3924 | core::core_arch::x86::avx512fp16 | _mm512_store_ph | function | |
| 3925 | core::core_arch::x86::avx512fp16 | _mm512_storeu_ph | function | |
| 3926 | core::core_arch::x86::avx512fp16 | _mm_load_ph | function | |
| 3927 | core::core_arch::x86::avx512fp16 | _mm_load_sh | function | |
| 3928 | core::core_arch::x86::avx512fp16 | _mm_loadu_ph | function | |
| 3929 | core::core_arch::x86::avx512fp16 | _mm_mask_load_sh | function | |
| 3930 | core::core_arch::x86::avx512fp16 | _mm_mask_store_sh | function | |
| 3931 | core::core_arch::x86::avx512fp16 | _mm_maskz_load_sh | function | |
| 3932 | core::core_arch::x86::avx512fp16 | _mm_store_ph | function | |
| 3933 | core::core_arch::x86::avx512fp16 | _mm_store_sh | function | |
| 3934 | core::core_arch::x86::avx512fp16 | _mm_storeu_ph | function | |
| 3935 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_compressstoreu_epi16 | function | |
| 3936 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_compressstoreu_epi8 | function | |
| 3937 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_expandloadu_epi16 | function | |
| 3938 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_expandloadu_epi8 | function | |
| 3939 | core::core_arch::x86::avx512vbmi2 | _mm256_maskz_expandloadu_epi16 | function | |
| 3940 | core::core_arch::x86::avx512vbmi2 | _mm256_maskz_expandloadu_epi8 | function | |
| 3941 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_compressstoreu_epi16 | function | |
| 3942 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_compressstoreu_epi8 | function | |
| 3943 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_expandloadu_epi16 | function | |
| 3944 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_expandloadu_epi8 | function | |
| 3945 | core::core_arch::x86::avx512vbmi2 | _mm512_maskz_expandloadu_epi16 | function | |
| 3946 | core::core_arch::x86::avx512vbmi2 | _mm512_maskz_expandloadu_epi8 | function | |
| 3947 | core::core_arch::x86::avx512vbmi2 | _mm_mask_compressstoreu_epi16 | function | |
| 3948 | core::core_arch::x86::avx512vbmi2 | _mm_mask_compressstoreu_epi8 | function | |
| 3949 | core::core_arch::x86::avx512vbmi2 | _mm_mask_expandloadu_epi16 | function | |
| 3950 | core::core_arch::x86::avx512vbmi2 | _mm_mask_expandloadu_epi8 | function | |
| 3951 | core::core_arch::x86::avx512vbmi2 | _mm_maskz_expandloadu_epi16 | function | |
| 3952 | core::core_arch::x86::avx512vbmi2 | _mm_maskz_expandloadu_epi8 | function | |
| 3953 | core::core_arch::x86::avxneconvert | _mm256_bcstnebf16_ps | function | |
| 3954 | core::core_arch::x86::avxneconvert | _mm256_bcstnesh_ps | function | |
| 3955 | core::core_arch::x86::avxneconvert | _mm256_cvtneebf16_ps | function | |
| 3956 | core::core_arch::x86::avxneconvert | _mm256_cvtneeph_ps | function | |
| 3957 | core::core_arch::x86::avxneconvert | _mm256_cvtneobf16_ps | function | |
| 3958 | core::core_arch::x86::avxneconvert | _mm256_cvtneoph_ps | function | |
| 3959 | core::core_arch::x86::avxneconvert | _mm_bcstnebf16_ps | function | |
| 3960 | core::core_arch::x86::avxneconvert | _mm_bcstnesh_ps | function | |
| 3961 | core::core_arch::x86::avxneconvert | _mm_cvtneebf16_ps | function | |
| 3962 | core::core_arch::x86::avxneconvert | _mm_cvtneeph_ps | function | |
| 3963 | core::core_arch::x86::avxneconvert | _mm_cvtneobf16_ps | function | |
| 3964 | core::core_arch::x86::avxneconvert | _mm_cvtneoph_ps | function | |
| 3965 | core::core_arch::x86::bt | _bittest | function | |
| 3966 | core::core_arch::x86::bt | _bittestandcomplement | function | |
| 3967 | core::core_arch::x86::bt | _bittestandreset | function | |
| 3968 | core::core_arch::x86::bt | _bittestandset | function | |
| 3969 | core::core_arch::x86::fxsr | _fxrstor | function | |
| 3970 | core::core_arch::x86::fxsr | _fxsave | function | |
| 3971 | core::core_arch::x86::kl | _mm_aesdec128kl_u8 | function | |
| 3972 | core::core_arch::x86::kl | _mm_aesdec256kl_u8 | function | |
| 3973 | core::core_arch::x86::kl | _mm_aesdecwide128kl_u8 | function | |
| 3974 | core::core_arch::x86::kl | _mm_aesdecwide256kl_u8 | function | |
| 3975 | core::core_arch::x86::kl | _mm_aesenc128kl_u8 | function | |
| 3976 | core::core_arch::x86::kl | _mm_aesenc256kl_u8 | function | |
| 3977 | core::core_arch::x86::kl | _mm_aesencwide128kl_u8 | function | |
| 3978 | core::core_arch::x86::kl | _mm_aesencwide256kl_u8 | function | |
| 3979 | core::core_arch::x86::kl | _mm_encodekey128_u32 | function | |
| 3980 | core::core_arch::x86::kl | _mm_encodekey256_u32 | function | |
| 3981 | core::core_arch::x86::kl | _mm_loadiwkey | function | |
| 3982 | core::core_arch::x86::rdtsc | __rdtscp | function | |
| 3983 | core::core_arch::x86::rdtsc | _rdtsc | function | |
| 3984 | core::core_arch::x86::rtm | _xabort | function | |
| 3985 | core::core_arch::x86::rtm | _xbegin | function | |
| 3986 | core::core_arch::x86::rtm | _xend | function | |
| 3987 | core::core_arch::x86::rtm | _xtest | function | |
| 3988 | core::core_arch::x86::sse | _MM_GET_EXCEPTION_MASK | function | |
| 3989 | core::core_arch::x86::sse | _MM_GET_EXCEPTION_STATE | function | |
| 3990 | core::core_arch::x86::sse | _MM_GET_FLUSH_ZERO_MODE | function | |
| 3991 | core::core_arch::x86::sse | _MM_GET_ROUNDING_MODE | function | |
| 3992 | core::core_arch::x86::sse | _MM_SET_EXCEPTION_MASK | function | |
| 3993 | core::core_arch::x86::sse | _MM_SET_EXCEPTION_STATE | function | |
| 3994 | core::core_arch::x86::sse | _MM_SET_FLUSH_ZERO_MODE | function | |
| 3995 | core::core_arch::x86::sse | _MM_SET_ROUNDING_MODE | function | |
| 3996 | core::core_arch::x86::sse | _mm_getcsr | function | |
| 3997 | core::core_arch::x86::sse | _mm_load1_ps | function | |
| 3998 | core::core_arch::x86::sse | _mm_load_ps | function | |
| 3999 | core::core_arch::x86::sse | _mm_load_ps1 | function | |
| 4000 | core::core_arch::x86::sse | _mm_load_ss | function | |
| 4001 | core::core_arch::x86::sse | _mm_loadr_ps | function | |
| 4002 | core::core_arch::x86::sse | _mm_loadu_ps | function | |
| 4003 | core::core_arch::x86::sse | _mm_setcsr | function | |
| 4004 | core::core_arch::x86::sse | _mm_store1_ps | function | |
| 4005 | core::core_arch::x86::sse | _mm_store_ps | function | |
| 4006 | core::core_arch::x86::sse | _mm_store_ps1 | function | |
| 4007 | core::core_arch::x86::sse | _mm_store_ss | function | |
| 4008 | core::core_arch::x86::sse | _mm_storer_ps | function | |
| 4009 | core::core_arch::x86::sse | _mm_storeu_ps | function | |
| 4010 | core::core_arch::x86::sse | _mm_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
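An aside from the generated table: the recurring `_mm_sfence` doc mark on the streaming stores above is the one safety condition that is easy to miss, because the store itself succeeds without the fence. A minimal sketch of the intended pattern, using the stable `_mm_stream_ps` from this row (the `stream_fill` helper and its portable fallback are illustrative, not part of the listed API):

```rust
#[cfg(target_arch = "x86_64")]
fn stream_fill(value: f32) -> [f32; 4] {
    use core::arch::x86_64::{_mm_set1_ps, _mm_sfence, _mm_stream_ps};
    // `_mm_stream_ps` requires a 16-byte-aligned destination.
    #[repr(align(16))]
    struct Aligned([f32; 4]);
    let mut buf = Aligned([0.0; 4]);
    // SAFETY: `buf` is 16-byte aligned and valid for a 16-byte write;
    // SSE is a baseline feature on x86_64.
    unsafe {
        _mm_stream_ps(buf.0.as_mut_ptr(), _mm_set1_ps(value));
        // Per the doc mark: fence before any other access to this memory.
        _mm_sfence();
    }
    buf.0
}

#[cfg(not(target_arch = "x86_64"))]
fn stream_fill(value: f32) -> [f32; 4] {
    // Portable fallback so the sketch compiles on every architecture.
    [value; 4]
}

fn main() {
    assert_eq!(stream_fill(3.0), [3.0; 4]);
}
```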
| 4011 | core::core_arch::x86::sse2 | _mm_clflush | function | |
| 4012 | core::core_arch::x86::sse2 | _mm_load1_pd | function | |
| 4013 | core::core_arch::x86::sse2 | _mm_load_pd | function | |
| 4014 | core::core_arch::x86::sse2 | _mm_load_pd1 | function | |
| 4015 | core::core_arch::x86::sse2 | _mm_load_sd | function | |
| 4016 | core::core_arch::x86::sse2 | _mm_load_si128 | function | |
| 4017 | core::core_arch::x86::sse2 | _mm_loadh_pd | function | |
| 4018 | core::core_arch::x86::sse2 | _mm_loadl_epi64 | function | |
| 4019 | core::core_arch::x86::sse2 | _mm_loadl_pd | function | |
| 4020 | core::core_arch::x86::sse2 | _mm_loadr_pd | function | |
| 4021 | core::core_arch::x86::sse2 | _mm_loadu_pd | function | |
| 4022 | core::core_arch::x86::sse2 | _mm_loadu_si128 | function | |
| 4023 | core::core_arch::x86::sse2 | _mm_loadu_si16 | function | |
| 4024 | core::core_arch::x86::sse2 | _mm_loadu_si32 | function | |
| 4025 | core::core_arch::x86::sse2 | _mm_loadu_si64 | function | |
| 4026 | core::core_arch::x86::sse2 | _mm_maskmoveu_si128 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4027 | core::core_arch::x86::sse2 | _mm_store1_pd | function | |
| 4028 | core::core_arch::x86::sse2 | _mm_store_pd | function | |
| 4029 | core::core_arch::x86::sse2 | _mm_store_pd1 | function | |
| 4030 | core::core_arch::x86::sse2 | _mm_store_sd | function | |
| 4031 | core::core_arch::x86::sse2 | _mm_store_si128 | function | |
| 4032 | core::core_arch::x86::sse2 | _mm_storeh_pd | function | |
| 4033 | core::core_arch::x86::sse2 | _mm_storel_epi64 | function | |
| 4034 | core::core_arch::x86::sse2 | _mm_storel_pd | function | |
| 4035 | core::core_arch::x86::sse2 | _mm_storer_pd | function | |
| 4036 | core::core_arch::x86::sse2 | _mm_storeu_pd | function | |
| 4037 | core::core_arch::x86::sse2 | _mm_storeu_si128 | function | |
| 4038 | core::core_arch::x86::sse2 | _mm_storeu_si16 | function | |
| 4039 | core::core_arch::x86::sse2 | _mm_storeu_si32 | function | |
| 4040 | core::core_arch::x86::sse2 | _mm_storeu_si64 | function | |
| 4041 | core::core_arch::x86::sse2 | _mm_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4042 | core::core_arch::x86::sse2 | _mm_stream_si128 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4043 | core::core_arch::x86::sse2 | _mm_stream_si32 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4044 | core::core_arch::x86::sse3 | _mm_lddqu_si128 | function | |
| 4045 | core::core_arch::x86::sse3 | _mm_loaddup_pd | function | |
| 4046 | core::core_arch::x86::sse41 | _mm_stream_load_si128 | function | |
| 4047 | core::core_arch::x86::sse4a | _mm_stream_sd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4048 | core::core_arch::x86::sse4a | _mm_stream_ss | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4049 | core::core_arch::x86::xsave | _xgetbv | function | |
| 4050 | core::core_arch::x86::xsave | _xrstor | function | |
| 4051 | core::core_arch::x86::xsave | _xrstors | function | |
| 4052 | core::core_arch::x86::xsave | _xsave | function | |
| 4053 | core::core_arch::x86::xsave | _xsavec | function | |
| 4054 | core::core_arch::x86::xsave | _xsaveopt | function | |
| 4055 | core::core_arch::x86::xsave | _xsaves | function | |
| 4056 | core::core_arch::x86::xsave | _xsetbv | function | |
| 4057 | core::core_arch::x86_64::amx | _tile_cmmimfp16ps | function | |
| 4058 | core::core_arch::x86_64::amx | _tile_cmmrlfp16ps | function | |
| 4059 | core::core_arch::x86_64::amx | _tile_cvtrowd2ps | function | |
| 4060 | core::core_arch::x86_64::amx | _tile_cvtrowd2psi | function | |
| 4061 | core::core_arch::x86_64::amx | _tile_cvtrowps2phh | function | |
| 4062 | core::core_arch::x86_64::amx | _tile_cvtrowps2phhi | function | |
| 4063 | core::core_arch::x86_64::amx | _tile_cvtrowps2phl | function | |
| 4064 | core::core_arch::x86_64::amx | _tile_cvtrowps2phli | function | |
| 4065 | core::core_arch::x86_64::amx | _tile_dpbf16ps | function | |
| 4066 | core::core_arch::x86_64::amx | _tile_dpbf8ps | function | |
| 4067 | core::core_arch::x86_64::amx | _tile_dpbhf8ps | function | |
| 4068 | core::core_arch::x86_64::amx | _tile_dpbssd | function | |
| 4069 | core::core_arch::x86_64::amx | _tile_dpbsud | function | |
| 4070 | core::core_arch::x86_64::amx | _tile_dpbusd | function | |
| 4071 | core::core_arch::x86_64::amx | _tile_dpbuud | function | |
| 4072 | core::core_arch::x86_64::amx | _tile_dpfp16ps | function | |
| 4073 | core::core_arch::x86_64::amx | _tile_dphbf8ps | function | |
| 4074 | core::core_arch::x86_64::amx | _tile_dphf8ps | function | |
| 4075 | core::core_arch::x86_64::amx | _tile_loadconfig | function | |
| 4076 | core::core_arch::x86_64::amx | _tile_loadd | function | |
| 4077 | core::core_arch::x86_64::amx | _tile_loaddrs | function | |
| 4078 | core::core_arch::x86_64::amx | _tile_mmultf32ps | function | |
| 4079 | core::core_arch::x86_64::amx | _tile_movrow | function | |
| 4080 | core::core_arch::x86_64::amx | _tile_movrowi | function | |
| 4081 | core::core_arch::x86_64::amx | _tile_release | function | |
| 4082 | core::core_arch::x86_64::amx | _tile_storeconfig | function | |
| 4083 | core::core_arch::x86_64::amx | _tile_stored | function | |
| 4084 | core::core_arch::x86_64::amx | _tile_stream_loadd | function | |
| 4085 | core::core_arch::x86_64::amx | _tile_stream_loaddrs | function | |
| 4086 | core::core_arch::x86_64::amx | _tile_zero | function | |
| 4087 | core::core_arch::x86_64::bt | _bittest64 | function | |
| 4088 | core::core_arch::x86_64::bt | _bittestandcomplement64 | function | |
| 4089 | core::core_arch::x86_64::bt | _bittestandreset64 | function | |
| 4090 | core::core_arch::x86_64::bt | _bittestandset64 | function | |
| 4091 | core::core_arch::x86_64::cmpxchg16b | cmpxchg16b | function | |
| 4092 | core::core_arch::x86_64::fxsr | _fxrstor64 | function | |
| 4093 | core::core_arch::x86_64::fxsr | _fxsave64 | function | |
| 4094 | core::core_arch::x86_64::movrs | _movrs_i16 | function | |
| 4095 | core::core_arch::x86_64::movrs | _movrs_i32 | function | |
| 4096 | core::core_arch::x86_64::movrs | _movrs_i64 | function | |
| 4097 | core::core_arch::x86_64::movrs | _movrs_i8 | function | |
| 4098 | core::core_arch::x86_64::sse2 | _mm_stream_si64 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. |
| 4099 | core::core_arch::x86_64::xsave | _xrstor64 | function | |
| 4100 | core::core_arch::x86_64::xsave | _xrstors64 | function | |
| 4101 | core::core_arch::x86_64::xsave | _xsave64 | function | |
| 4102 | core::core_arch::x86_64::xsave | _xsavec64 | function | |
| 4103 | core::core_arch::x86_64::xsave | _xsaveopt64 | function | |
| 4104 | core::core_arch::x86_64::xsave | _xsaves64 | function | |
| 4105 | core::core_simd::cast::sealed | Sealed | trait | Implementing this trait asserts that the type is a valid vector element for the `simd_cast` or `simd_as` intrinsics. |
| 4106 | core::core_simd::masks | MaskElement | trait | Type must be a signed integer. |
| 4107 | core::core_simd::masks::Mask | from_simd_unchecked | function | All elements must be either 0 or -1. |
| 4108 | core::core_simd::masks::Mask | set_unchecked | function | `index` must be less than `self.len()`. |
| 4109 | core::core_simd::masks::Mask | test_unchecked | function | `index` must be less than `self.len()`. |
| 4110 | core::core_simd::vector | SimdElement | trait | This trait, when implemented, asserts the compiler can monomorphize `#[repr(simd)]` structs with the marked type as an element. Strictly, it is valid to impl if the vector will not be miscompiled. Practically, it is user-unfriendly to impl it if the vector won't compile, even when no soundness guarantees are broken by allowing the user to try. |
| 4111 | core::core_simd::vector::Simd | gather_ptr | function | Each read must satisfy the same conditions as [`core::ptr::read`]. |
| 4112 | core::core_simd::vector::Simd | gather_select_ptr | function | Enabled elements must satisfy the same conditions as [`core::ptr::read`]. |
| 4113 | core::core_simd::vector::Simd | gather_select_unchecked | function | Calling this function with an `enable`d out-of-bounds index is *[undefined behavior]* even if the resulting value is not used. |
| 4114 | core::core_simd::vector::Simd | load_select_ptr | function | Enabled `ptr` elements must be safe to read as if by `core::ptr::read`. |
| 4115 | core::core_simd::vector::Simd | load_select_unchecked | function | Enabled loads must not exceed the length of `slice`. |
| 4116 | core::core_simd::vector::Simd | scatter_ptr | function | Each write must satisfy the same conditions as [`core::ptr::write`]. |
| 4117 | core::core_simd::vector::Simd | scatter_select_ptr | function | Enabled pointers must satisfy the same conditions as [`core::ptr::write`]. |
| 4118 | core::core_simd::vector::Simd | scatter_select_unchecked | function | Calling this function with an enabled out-of-bounds index is *[undefined behavior]*, and may lead to memory corruption. |
| 4119 | core::core_simd::vector::Simd | store_select_ptr | function | Memory addresses for each element are calculated with [`pointer::wrapping_offset`], and each enabled element must satisfy the same conditions as [`core::ptr::write`]. |
| 4120 | core::core_simd::vector::Simd | store_select_unchecked | function | Every enabled element must be in bounds for the `slice`. |
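An aside from the generated table: the `Simd` gather/scatter doc marks above all reduce the SIMD contract to per-element `core::ptr::read`/`core::ptr::write` conditions. Since `std::simd` is nightly-only, here is a scalar sketch of the same contract (the `gather` helper is illustrative, not the portable-SIMD API itself):

```rust
fn gather(data: &[i32], idx: &[usize]) -> Vec<i32> {
    let base = data.as_ptr();
    idx.iter()
        .map(|&i| {
            // Each element read must satisfy core::ptr::read's conditions:
            // in bounds of the same allocation, aligned, and initialized.
            assert!(i < data.len());
            // SAFETY: the index was checked against `data.len()` above.
            unsafe { core::ptr::read(base.add(i)) }
        })
        .collect()
}

fn main() {
    let data = [10, 20, 30, 40];
    // Elements are fetched in index order, like a masked gather with
    // every lane enabled.
    assert_eq!(gather(&data, &[3, 0, 2]), vec![40, 10, 30]);
}
```

The unchecked SIMD variants in the table skip exactly the bounds assertion shown here, which is why an out-of-bounds enabled index is immediate undefined behavior.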
| 4121 | core::f128 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part |
| 4122 | core::f16 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part |
| 4123 | core::f32 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part |
| 4124 | core::f64 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part |
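An aside from the generated table: the three `to_int_unchecked` preconditions above are all checkable up front, which is the usual way to discharge them. A minimal sketch for the stable `f64` row (the `truncate_checked` wrapper is illustrative, not part of the listed API):

```rust
fn truncate_checked(x: f64) -> i32 {
    // Discharge the doc-marked preconditions: not NaN, not infinite, and
    // representable in i32 after truncating the fractional part.
    assert!(x.is_finite());
    assert!(x > (i32::MIN as f64) - 1.0 && x < (i32::MAX as f64) + 1.0);
    // SAFETY: all three conditions were verified above.
    unsafe { x.to_int_unchecked() }
}

fn main() {
    // Truncation rounds toward zero.
    assert_eq!(truncate_checked(3.9), 3);
    assert_eq!(truncate_checked(-3.9), -3);
}
```

In release code where the range is already guaranteed by construction, the asserts would become the caller's documented obligation instead.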
| 4125 | core::ffi | VaArgSafe | trait | The standard library implements this trait for primitive types that are expected to have a variable argument application-binary interface (ABI) on all platforms. When C passes variable arguments, integers smaller than [`c_int`] and floats smaller than [`c_double`] are implicitly promoted to [`c_int`] and [`c_double`] respectively. Implementing this trait for types that are subject to this promotion rule is invalid. [`c_int`]: core::ffi::c_int [`c_double`]: core::ffi::c_double |
| 4126 | core::ffi::c_str::CStr | from_bytes_with_nul_unchecked | function | The provided slice **must** be nul-terminated and not contain any interior nul bytes. |
| 4127 | core::ffi::c_str::CStr | from_ptr | function | * The memory pointed to by `ptr` must contain a valid nul terminator at the end of the string. * `ptr` must be [valid] for reads of bytes up to and including the nul terminator. This means in particular: * The entire memory range of this `CStr` must be contained within a single allocation! * `ptr` must be non-null even for a zero-length cstr. * The memory referenced by the returned `CStr` must not be mutated for the duration of lifetime `'a`. * The nul terminator must be within `isize::MAX` from `ptr` > **Note**: This operation is intended to be a 0-cost cast but it is > currently implemented with an up-front calculation of the length of the string. This is not guaranteed to always be the case. |
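An aside from the generated table: the two `CStr` constructors above differ only in how much the caller must prove; the byte-slice variant has the simpler contract, shown here as a minimal sketch:

```rust
use std::ffi::CStr;

fn main() {
    // A literal like this satisfies both doc-marked conditions:
    // nul-terminated, and no interior nul bytes.
    let bytes = b"hello\0";
    // SAFETY: the two conditions above hold for `bytes`.
    let c = unsafe { CStr::from_bytes_with_nul_unchecked(bytes) };
    assert_eq!(c.to_str().unwrap(), "hello");
    // The length excludes the terminating nul.
    assert_eq!(c.to_bytes().len(), 5);
}
```

`CStr::from_ptr` carries the longer pointer-validity list above because it cannot see the allocation's bounds; prefer the checked `CStr::from_bytes_with_nul` unless the input is known-good by construction.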
| 4128 | core::ffi::va_list::VaList | arg | function | This function is only sound to call when there is another argument to read, and that argument is a properly initialized value of the type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments, is unsound. |
| 4129 | core::field | Field | trait | Given a valid value of type `Self::Base`, there exists a valid value of type `Self::Type` at byte offset `OFFSET`. |
| 4130 | core::future | async_drop_in_place | function | The pointer `_to_drop` must be valid for both reads and writes, not only for the duration of this function call, but also until the returned future has completed. See [ptr::drop_in_place] for additional safety concerns. [ptr::drop_in_place]: crate::ptr::drop_in_place() |
| 4131 | core::hint | assert_unchecked | function | `cond` must be `true`. It is immediate UB to call this with `false`. |
| 4132 | core::hint | unreachable_unchecked | function | Reaching this function is *Undefined Behavior*. As the compiler assumes that all forms of Undefined Behavior can never happen, it will eliminate all branches in the surrounding code that it can determine will invariably lead to a call to `unreachable_unchecked()`. If the assumptions embedded in using this function turn out to be wrong - that is, if the site which is calling `unreachable_unchecked()` is actually reachable at runtime - the compiler may have generated nonsensical machine instructions for this situation, including in seemingly unrelated code, causing difficult-to-debug problems. Use this function sparingly. Consider using the [`unreachable!`] macro, which may prevent some optimizations but will safely panic in case it is actually reached at runtime. Benchmark your code to find out if using `unreachable_unchecked()` comes with a performance benefit. |
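An aside from the generated table: `core::hint::assert_unchecked` above is the "tell the optimizer" primitive, and its doc mark is the entire contract. A minimal sketch where the condition is verified first, so the hint is trivially true and only informs codegen (the `mod_pow2` helper is illustrative, not part of the listed API):

```rust
use core::hint::assert_unchecked;

fn mod_pow2(n: u32, m: u32) -> u32 {
    // In real code the caller would guarantee this; here it is verified,
    // making the hint below sound by construction.
    assert!(m.is_power_of_two());
    // SAFETY: checked above. Passing `false` here would be immediate UB.
    unsafe { assert_unchecked(m.is_power_of_two()) };
    // With the hint, the remainder can be lowered to a bitmask.
    n % m
}

fn main() {
    assert_eq!(mod_pow2(13, 8), 5);
    assert_eq!(mod_pow2(16, 4), 0);
}
```

`unreachable_unchecked` is the same idea taken to the limit: the hinted condition is "this point is never reached", so there is nothing left to verify at the call site and the burden falls entirely on surrounding control flow.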
4133core::i128unchecked_addfunctionThis results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add
4134core::i128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4135core::i128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul
4136core::i128unchecked_negfunctionThis results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg
4137core::i128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl
4138core::i128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i128::shl_exact`] would return `None`.

4139core::i128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr
4140core::i128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS`, i.e. when [`i128::shr_exact`] would return `None`.
4141core::i128unchecked_subfunctionThis results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub
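The `unchecked_*` arithmetic rows above (repeated below for every integer width) all share the same shape: UB exactly when the `checked_*` variant would return `None`. A sketch of the intended usage pattern, with a hypothetical helper name; real code would hoist the proof of the precondition out of a hot loop rather than check per call.

```rust
/// Hypothetical helper: prove the precondition with the checked variant,
/// then call the unchecked one.
fn add_in_range(a: i128, b: i128) -> i128 {
    assert!(a.checked_add(b).is_some(), "a + b must not overflow");
    // SAFETY: the assert above guarantees `a + b` fits in i128,
    // i.e. `checked_add` would not return `None`.
    unsafe { a.unchecked_add(b) }
}

fn main() {
    assert_eq!(add_in_range(40, 2), 42);
}
```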
4142core::i16unchecked_addfunctionThis results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add
4143core::i16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4144core::i16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul
4145core::i16unchecked_negfunctionThis results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg
4146core::i16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl
4147core::i16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i16::shl_exact`] would return `None`.
4148core::i16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr
4149core::i16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS`, i.e. when [`i16::shr_exact`] would return `None`.
4150core::i16unchecked_subfunctionThis results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub
4151core::i32unchecked_addfunctionThis results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add
4152core::i32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4153core::i32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul
4154core::i32unchecked_negfunctionThis results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg
4155core::i32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl
4156core::i32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i32::shl_exact`] would return `None`.
4157core::i32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr
4158core::i32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS`, i.e. when [`i32::shr_exact`] would return `None`.
4159core::i32unchecked_subfunctionThis results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub
4160core::i64unchecked_addfunctionThis results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add
4161core::i64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4162core::i64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul
4163core::i64unchecked_negfunctionThis results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg
4164core::i64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl
4165core::i64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i64::shl_exact`] would return `None`.
4166core::i64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr
4167core::i64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS`, i.e. when [`i64::shr_exact`] would return `None`.
4168core::i64unchecked_subfunctionThis results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub
4169core::i8unchecked_addfunctionThis results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add
4170core::i8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4171core::i8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul
4172core::i8unchecked_negfunctionThis results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg
4173core::i8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl
4174core::i8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i8::shl_exact`] would return `None`.
4175core::i8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr
4176core::i8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS`, i.e. when [`i8::shr_exact`] would return `None`.
4177core::i8unchecked_subfunctionThis results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub
4178core::intrinsicsalign_of_valfunctionSee [`crate::mem::align_of_val_raw`] for safety conditions.
4179core::intrinsicsarith_offsetfunctionUnlike the `offset` intrinsic, this intrinsic does not restrict the resulting pointer to point into or at the end of an allocated object, and it wraps with two's complement arithmetic. The resulting value is not necessarily valid to be used to actually access memory. The stabilized version of this intrinsic is [`pointer::wrapping_offset`].
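As the `arith_offset` row notes, the stabilized form is `pointer::wrapping_offset`: the arithmetic itself is safe and may leave the allocation; only a later dereference must be in bounds. A sketch (helper name `third_element` is illustrative):

```rust
/// Hypothetical helper using the stabilized `wrapping_offset`.
fn third_element(xs: &[i32; 4]) -> i32 {
    // Safe arithmetic: the pointer may even wrap out of bounds here.
    let p = xs.as_ptr().wrapping_offset(2);
    // SAFETY: index 2 is in bounds of the 4-element array, so the
    // resulting pointer is valid to read.
    unsafe { *p }
}

fn main() {
    assert_eq!(third_element(&[10, 20, 30, 40]), 30);
}
```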
4180core::intrinsicsassumefunction
4181core::intrinsicsatomic_andfunction
4182core::intrinsicsatomic_cxchgfunction
4183core::intrinsicsatomic_cxchgweakfunction
4184core::intrinsicsatomic_fencefunction
4185core::intrinsicsatomic_loadfunction
4186core::intrinsicsatomic_maxfunction
4187core::intrinsicsatomic_minfunction
4188core::intrinsicsatomic_nandfunction
4189core::intrinsicsatomic_orfunction
4190core::intrinsicsatomic_singlethreadfencefunction
4191core::intrinsicsatomic_storefunction
4192core::intrinsicsatomic_umaxfunction
4193core::intrinsicsatomic_uminfunction
4194core::intrinsicsatomic_xaddfunction
4195core::intrinsicsatomic_xchgfunction
4196core::intrinsicsatomic_xorfunction
4197core::intrinsicsatomic_xsubfunction
4198core::intrinsicscatch_unwindfunction
4199core::intrinsicscompare_bytesfunction`left` and `right` must each be [valid] for reads of `bytes` bytes. Note that this applies to the whole range, not just until the first byte that differs. That allows optimizations that can read in large chunks. [valid]: crate::ptr#safety
4200core::intrinsicsconst_allocatefunction- The `align` argument must be a power of two. - At compile time, a compile error occurs if this constraint is violated. - At runtime, it is not checked.
4201core::intrinsicsconst_deallocatefunction- The `align` argument must be a power of two. - At compile time, a compile error occurs if this constraint is violated. - At runtime, it is not checked. - If `ptr` was created in another const, this intrinsic doesn't deallocate it. - If `ptr` is pointing to a local variable, this intrinsic doesn't deallocate it.
4202core::intrinsicsconst_make_globalfunction
4203core::intrinsicscopyfunction
4204core::intrinsicscopy_nonoverlappingfunction
4205core::intrinsicsctlz_nonzerofunction
4206core::intrinsicscttz_nonzerofunction
4207core::intrinsicsdisjoint_bitorfunctionRequires that `(a & b) == 0`, or equivalently that `(a | b) == (a + b)`. Otherwise it's immediate UB.
4208core::intrinsicsexact_divfunction
4209core::intrinsicsfadd_fastfunction
4210core::intrinsicsfdiv_fastfunction
4211core::intrinsicsfloat_to_int_uncheckedfunction
4212core::intrinsicsfmul_fastfunction
4213core::intrinsicsfrem_fastfunction
4214core::intrinsicsfsub_fastfunction
4215core::intrinsicsnontemporal_storefunction
4216core::intrinsicsoffsetfunctionIf the computed offset is non-zero, then both the starting and resulting pointer must be either in bounds or at the end of an allocation. If either pointer is out of bounds or arithmetic overflow occurs then this operation is undefined behavior. The stabilized version of this intrinsic is [`pointer::offset`].
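The `offset` contract above carries over to the stabilized `pointer::offset`: start and result must stay inside (or one past the end of) the same allocation, with no overflow. A sketch with an illustrative helper name:

```rust
/// Hypothetical helper: last byte of a non-empty slice via `pointer::offset`.
fn last_via_offset(xs: &[u8]) -> u8 {
    assert!(!xs.is_empty(), "slice must be non-empty");
    let p = xs.as_ptr();
    // SAFETY: `len - 1` is in bounds of the slice's allocation, and the
    // byte offset cannot overflow isize for a valid slice.
    unsafe { *p.offset(xs.len() as isize - 1) }
}

fn main() {
    assert_eq!(last_via_offset(b"abc"), b'c');
}
```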
4217core::intrinsicsptr_offset_fromfunction
4218core::intrinsicsptr_offset_from_unsignedfunction
4219core::intrinsicsraw_eqfunctionIt's UB to call this if any of the *bytes* in `*a` or `*b` are uninitialized. Note that this is a stricter criterion than just the *values* being fully-initialized: if `T` has padding, it's UB to call this intrinsic. At compile-time, it is furthermore UB to call this if any of the bytes in `*a` or `*b` have provenance. (The implementation is allowed to branch on the results of comparisons, which is UB if any of their inputs are `undef`.)
4220core::intrinsicsread_via_copyfunction
4221core::intrinsicssize_of_valfunctionSee [`crate::mem::size_of_val_raw`] for safety conditions.
4222core::intrinsicsslice_get_uncheckedfunction- `index < PtrMetadata(slice_ptr)`, so the indexing is in-bounds for the slice - the resulting offsetting is in-bounds of the allocation, which is always the case for references, but needs to be upheld manually for pointers
4223core::intrinsicstransmute_uncheckedfunction
4224core::intrinsicstyped_swap_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` must *not* overlap with the region of memory beginning at `y`. * The memory pointed to by `x` and `y` must both contain values of type `T`. [valid]: crate::ptr#safety
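The same precondition list governs the stable `ptr::swap_nonoverlapping`, which can be sketched directly (helper name `swap_pairs` is illustrative):

```rust
use std::ptr;

/// Hypothetical helper: swap two 2-byte arrays through raw pointers.
fn swap_pairs(a: &mut [u8; 2], b: &mut [u8; 2]) {
    // SAFETY: the pointers come from two distinct exclusive references,
    // so they are valid, aligned, and cannot overlap.
    unsafe { ptr::swap_nonoverlapping(a.as_mut_ptr(), b.as_mut_ptr(), 2) };
}

fn main() {
    let (mut a, mut b) = ([1, 2], [9, 8]);
    swap_pairs(&mut a, &mut b);
    assert_eq!((a, b), ([9, 8], [1, 2]));
}
```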
4225core::intrinsicsunaligned_volatile_loadfunction
4226core::intrinsicsunaligned_volatile_storefunction
4227core::intrinsicsunchecked_addfunction
4228core::intrinsicsunchecked_divfunction
4229core::intrinsicsunchecked_funnel_shlfunction
4230core::intrinsicsunchecked_funnel_shrfunction
4231core::intrinsicsunchecked_mulfunction
4232core::intrinsicsunchecked_remfunction
4233core::intrinsicsunchecked_shlfunction
4234core::intrinsicsunchecked_shrfunction
4235core::intrinsicsunchecked_subfunction
4236core::intrinsicsunreachablefunction
4237core::intrinsicsva_argfunctionThis function is only sound to call when: - there is a next variable argument available. - the next argument's type is ABI-compatible with the type `T`. - the next argument holds a properly initialized value of type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments, is unsound.
4238core::intrinsicsva_endfunction`ap` must not be used to access variable arguments after this call.
4239core::intrinsicsvolatile_copy_memoryfunction
4240core::intrinsicsvolatile_copy_nonoverlapping_memoryfunctionThe safety requirements are consistent with [`copy_nonoverlapping`] while the read and write behaviors are volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`copy_nonoverlapping`]: ptr::copy_nonoverlapping
4241core::intrinsicsvolatile_loadfunction
4242core::intrinsicsvolatile_set_memoryfunctionThe safety requirements are consistent with [`write_bytes`] while the write behavior is volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`write_bytes`]: ptr::write_bytes
4243core::intrinsicsvolatile_storefunction
4244core::intrinsicsvtable_alignfunction`ptr` must point to a vtable.
4245core::intrinsicsvtable_sizefunction`ptr` must point to a vtable.
4246core::intrinsicswrite_bytesfunction
4247core::intrinsicswrite_via_movefunction
4248core::intrinsics::boundsBuiltinDereftraitMust actually *be* such a type.
4249core::intrinsics::boundsFloatPrimitivetraitMust actually *be* such a type.
4250core::intrinsics::simdsimd_addfunction
4251core::intrinsics::simdsimd_andfunction
4252core::intrinsics::simdsimd_arith_offsetfunction
4253core::intrinsics::simdsimd_asfunction
4254core::intrinsics::simdsimd_bitmaskfunction`x` must contain only `0` and `!0`.
4255core::intrinsics::simdsimd_bitreversefunction
4256core::intrinsics::simdsimd_bswapfunction
4257core::intrinsics::simdsimd_carryless_mulfunction
4258core::intrinsics::simdsimd_castfunctionCasting from integer types is always safe. Casting between two float types is also always safe. Casting floats to integers truncates, following the same rules as `to_int_unchecked`. Specifically, each element must: * Not be `NaN` * Not be infinite * Be representable in the return type, after truncating off its fractional part
4259core::intrinsics::simdsimd_cast_ptrfunction
4260core::intrinsics::simdsimd_ceilfunction
4261core::intrinsics::simdsimd_ctlzfunction
4262core::intrinsics::simdsimd_ctpopfunction
4263core::intrinsics::simdsimd_cttzfunction
4264core::intrinsics::simdsimd_divfunctionFor integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN / -1` is undefined behavior.
4265core::intrinsics::simdsimd_eqfunction
4266core::intrinsics::simdsimd_expose_provenancefunction
4267core::intrinsics::simdsimd_extractfunction`idx` must be const and in-bounds of the vector.
4268core::intrinsics::simdsimd_extract_dynfunction`idx` must be in-bounds of the vector.
4269core::intrinsics::simdsimd_fabsfunction
4270core::intrinsics::simdsimd_fcosfunction
4271core::intrinsics::simdsimd_fexpfunction
4272core::intrinsics::simdsimd_fexp2function
4273core::intrinsics::simdsimd_flogfunction
4274core::intrinsics::simdsimd_flog10function
4275core::intrinsics::simdsimd_flog2function
4276core::intrinsics::simdsimd_floorfunction
4277core::intrinsics::simdsimd_fmafunction
4278core::intrinsics::simdsimd_fsinfunction
4279core::intrinsics::simdsimd_fsqrtfunction
4280core::intrinsics::simdsimd_funnel_shlfunctionEach element of `shift` must be less than `<int>::BITS`.
4281core::intrinsics::simdsimd_funnel_shrfunctionEach element of `shift` must be less than `<int>::BITS`.
4282core::intrinsics::simdsimd_gatherfunctionUnmasked values in `T` must be readable as if by `<ptr>::read` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values.
4283core::intrinsics::simdsimd_gefunction
4284core::intrinsics::simdsimd_gtfunction
4285core::intrinsics::simdsimd_insertfunction`idx` must be in-bounds of the vector.
4286core::intrinsics::simdsimd_insert_dynfunction`idx` must be in-bounds of the vector.
4287core::intrinsics::simdsimd_lefunction
4288core::intrinsics::simdsimd_ltfunction
4289core::intrinsics::simdsimd_masked_loadfunction`ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values.
4290core::intrinsics::simdsimd_masked_storefunction`ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values.
4291core::intrinsics::simdsimd_maximum_number_nszfunction
4292core::intrinsics::simdsimd_minimum_number_nszfunction
4293core::intrinsics::simdsimd_mulfunction
4294core::intrinsics::simdsimd_nefunction
4295core::intrinsics::simdsimd_negfunction
4296core::intrinsics::simdsimd_orfunction
4297core::intrinsics::simdsimd_reduce_add_orderedfunction
4298core::intrinsics::simdsimd_reduce_add_unorderedfunction
4299core::intrinsics::simdsimd_reduce_allfunction`x` must contain only `0` or `!0`.
4300core::intrinsics::simdsimd_reduce_andfunction
4301core::intrinsics::simdsimd_reduce_anyfunction`x` must contain only `0` or `!0`.
4302core::intrinsics::simdsimd_reduce_maxfunction
4303core::intrinsics::simdsimd_reduce_minfunction
4304core::intrinsics::simdsimd_reduce_mul_orderedfunction
4305core::intrinsics::simdsimd_reduce_mul_unorderedfunction
4306core::intrinsics::simdsimd_reduce_orfunction
4307core::intrinsics::simdsimd_reduce_xorfunction
4308core::intrinsics::simdsimd_relaxed_fmafunction
4309core::intrinsics::simdsimd_remfunctionFor integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN % -1` is undefined behavior.
4310core::intrinsics::simdsimd_roundfunction
4311core::intrinsics::simdsimd_round_ties_evenfunction
4312core::intrinsics::simdsimd_saturating_addfunction
4313core::intrinsics::simdsimd_saturating_subfunction
4314core::intrinsics::simdsimd_scatterfunctionUnmasked values in `T` must be writeable as if by `<ptr>::write` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values.
4315core::intrinsics::simdsimd_selectfunction`mask` must only contain `0` and `!0`.
4316core::intrinsics::simdsimd_select_bitmaskfunction
4317core::intrinsics::simdsimd_shlfunctionEach element of `rhs` must be less than `<int>::BITS`.
4318core::intrinsics::simdsimd_shrfunctionEach element of `rhs` must be less than `<int>::BITS`.
4319core::intrinsics::simdsimd_shufflefunction
4320core::intrinsics::simdsimd_splatfunction
4321core::intrinsics::simdsimd_subfunction
4322core::intrinsics::simdsimd_truncfunction
4323core::intrinsics::simdsimd_with_exposed_provenancefunction
4324core::intrinsics::simdsimd_xorfunction
4325core::intrinsics::simd::scalablesve_tuple_create2function
4326core::intrinsics::simd::scalablesve_tuple_create3function
4327core::intrinsics::simd::scalablesve_tuple_create4function
4328core::intrinsics::simd::scalablesve_tuple_getfunction`IDX` must be in-bounds of the tuple.
4329core::intrinsics::simd::scalablesve_tuple_setfunction`IDX` must be in-bounds of the tuple.
4330core::io::borrowed_buf::BorrowedBufset_initfunctionAll the bytes of the buffer must be initialized.
4331core::io::borrowed_buf::BorrowedCursoradvancefunctionThe caller must ensure that the first `n` bytes of the cursor have been properly initialized.
4332core::io::borrowed_buf::BorrowedCursoras_mutfunctionThe caller must not uninitialize any bytes of the cursor if it is initialized.
4333core::io::borrowed_buf::BorrowedCursorset_initfunctionAll the bytes of the cursor must be initialized.
4334core::isizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add
4335core::isizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4336core::isizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul
4337core::isizeunchecked_negfunctionThis results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg
4338core::isizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl
4339core::isizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`isize::shl_exact`] would return `None`.
4340core::isizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr
4341core::isizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS`, i.e. when [`isize::shr_exact`] would return `None`.
4342core::isizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub
4343core::iterTrustedLentraitThis trait must only be implemented when the contract is upheld. Consumers of this trait must inspect [`Iterator::size_hint()`]’s upper bound.
4344core::iterTrustedSteptraitThe implementation of [`Step`] for the given type must guarantee all invariants of all methods are upheld. See the [`Step`] trait's documentation for details. Consumers are free to rely on the invariants in unsafe code.
4345core::markerFreezetraitThis trait is a core part of the language; it is just expressed as a trait in libcore for convenience. Do *not* implement it for other types.
4346core::markerUnsafeUnpintrait
4347core::memTransmuteFromtraitIf `Dst: TransmuteFrom<Src, ASSUMPTIONS>`, the compiler guarantees that `Src` is soundly *union-transmutable* into a value of type `Dst`, provided that the programmer has guaranteed that the given [`ASSUMPTIONS`](Assume) are satisfied. A union-transmute is any bit-reinterpretation conversion in the form of: ```rust pub unsafe fn transmute_via_union<Src, Dst>(src: Src) -> Dst { use core::mem::ManuallyDrop; #[repr(C)] union Transmute<Src, Dst> { src: ManuallyDrop<Src>, dst: ManuallyDrop<Dst>, } let transmute = Transmute { src: ManuallyDrop::new(src), }; let dst = unsafe { transmute.dst }; ManuallyDrop::into_inner(dst) } ``` Note that this construction is more permissive than [`mem::transmute_copy`](super::transmute_copy); union-transmutes permit conversions that extend the bits of `Src` with trailing padding to fill trailing uninitialized bytes of `Self`; e.g.: ```rust #![feature(transmutability)] use core::mem::{Assume, TransmuteFrom}; let src = 42u8; // size = 1 #[repr(C, align(2))] struct Dst(u8); // size = 2 let _ = unsafe { <Dst as TransmuteFrom<u8, { Assume::SAFETY }>>::transmute(src) }; ```
4348core::memalign_of_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`align_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
4349core::memconjure_zstfunction- `T` must be *[inhabited]*, i.e. possible to construct. This means that types like zero-variant enums and [`!`] are unsound to conjure. - You must use the value only in ways which do not violate any *safety* invariants of the type. While it's easy to create a *valid* instance of an inhabited ZST, since having no bits in its representation means there's only one possible value, that doesn't mean that it's always *sound* to do so. For example, a library could design zero-sized tokens that are `!Default + !Clone`, limiting their creation to functions that initialize some state or establish a scope. Conjuring such a token could break invariants and lead to unsoundness.
4350core::memsize_of_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`size_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [`size_of::<T>()`]: size_of [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
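The safe, reference-based counterpart of `size_of_val_raw` illustrates the slice-tail case above: the size is the statically sized prefix (none here) plus the dynamic tail length times the element size.

```rust
use std::mem::size_of_val;

/// Hypothetical helper: byte size of a `[u16]` slice tail.
fn slice_bytes(xs: &[u16]) -> usize {
    // 3 elements * 2 bytes each = 6 for a 3-element slice.
    size_of_val(xs)
}

fn main() {
    assert_eq!(slice_bytes(&[1, 2, 3]), 6);
}
```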
4351core::memtransmutefunction
4352core::memtransmute_copyfunction
4353core::memtransmute_neofunction
4354core::memtransmute_prefixfunctionIf `size_of::<Src>() >= size_of::<Dst>()`, the first `size_of::<Dst>()` bytes of `src` must be *valid* when interpreted as a `Dst`. (In this case, the preconditions are the same as for `transmute_copy(&ManuallyDrop::new(src))`.) If `size_of::<Src>() <= size_of::<Dst>()`, the bytes of `src` padded with uninitialized bytes afterwards up to a total size of `size_of::<Dst>()` must be *valid* when interpreted as a `Dst`. In both cases, any safety preconditions of the `Dst` type must also be upheld.
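The `size_of::<Src>() >= size_of::<Dst>()` case mentioned above has a stable analogue in `transmute_copy`, which reads the first `size_of::<Dst>()` bytes of `src` as a `Dst`. A sketch (helper name is illustrative):

```rust
use std::mem::transmute_copy;

/// Hypothetical helper: the first four in-memory bytes of a u64.
fn first_four_bytes(src: u64) -> [u8; 4] {
    // SAFETY: any 4 initialized bytes are a valid `[u8; 4]`, and
    // `Dst` is no larger than `Src`.
    unsafe { transmute_copy(&src) }
}

fn main() {
    // Build the u64 from native-endian bytes so the prefix is
    // predictable on any platform.
    let v = u64::from_ne_bytes([1, 2, 3, 4, 5, 6, 7, 8]);
    assert_eq!(first_four_bytes(v), [1, 2, 3, 4]);
}
```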
4355core::memuninitializedfunction
4356core::memzeroedfunction
4357core::mem::alignment::Alignmentnew_uncheckedfunction`align` must be a power of two. Equivalently, it must be `1 << exp` for some `exp` in `0..usize::BITS`. It must *not* be zero.
4358core::mem::alignment::Alignmentof_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Alignment::of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
4359core::mem::manually_drop::ManuallyDropdropfunctionThis function runs the destructor of the contained value. Other than changes made by the destructor itself, the memory is left unchanged, and so as far as the compiler is concerned still holds a bit-pattern which is valid for the type `T`. However, this "zombie" value should not be exposed to safe code, and this function should not be called more than once. Using a value after it has been dropped, or dropping a value multiple times, can cause Undefined Behavior (depending on what `drop` does). This is normally prevented by the type system, but users of `ManuallyDrop` must uphold those guarantees without assistance from the compiler. [pinned]: crate::pin
4360core::mem::manually_drop::ManuallyDroptakefunctionThis function semantically moves out the contained value without preventing further usage, leaving the state of this container unchanged. It is your responsibility to ensure that this `ManuallyDrop` is not used again.
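A minimal sketch of upholding the `ManuallyDrop::take` contract (the helper name `take_once` is hypothetical, not part of the API): the container is taken exactly once and never touched again.

```rust
use std::mem::ManuallyDrop;

// Hypothetical helper: move the value out of a ManuallyDrop exactly once.
fn take_once(slot: &mut ManuallyDrop<String>) -> String {
    // SAFETY: the caller promises `slot` is initialized and is not used
    // (taken, dropped, or read) again after this call.
    unsafe { ManuallyDrop::take(slot) }
}

fn main() {
    let mut slot = ManuallyDrop::new(String::from("hello"));
    let s = take_once(&mut slot);
    assert_eq!(s, "hello");
    // `slot` must not be used again: its contents have been moved out.
}
```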
4361core::mem::maybe_uninit::MaybeUninitarray_assume_initfunctionIt is up to the caller to guarantee that all elements of the array are in an initialized state.
4362core::mem::maybe_uninit::MaybeUninitassume_initfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state, i.e., a state that is considered ["valid" for type `T`][validity]. Calling this when the content is not yet fully initialized causes immediate undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. It is a common mistake to assume that this function is safe to call on integers because they can hold all bit patterns. It is also a common mistake to think that calling this function is UB if any byte is uninitialized. Both of these assumptions are wrong. If that is surprising to you, please read the [type-level documentation][inv]. [inv]: #initialization-invariant [validity]: ../../reference/behavior-considered-undefined.html#r-undefined.validity On top of that, remember that most types have additional invariants beyond merely being considered initialized at the type level. For example, a `1`-initialized [`Vec<T>`] is considered initialized (under the current implementation; this does not constitute a stable guarantee) because the only requirement the compiler knows about it is that the data pointer must be non-null. Creating such a `Vec<T>` does not cause *immediate* undefined behavior, but will cause undefined behavior with most safe operations (including dropping it). [`Vec<T>`]: ../../std/vec/struct.Vec.html
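A minimal sketch of the initialization invariant described above (the helper name `init_pair` is hypothetical): every byte the type requires is written before `assume_init` is called.

```rust
use std::mem::MaybeUninit;

// Hypothetical helper: fully initialize a slot, then assume it initialized.
fn init_pair() -> (u32, u32) {
    let mut slot = MaybeUninit::<(u32, u32)>::uninit();
    slot.write((1, 2)); // `write` fully initializes the contents
    // SAFETY: the `write` above put a valid `(u32, u32)` into the slot.
    unsafe { slot.assume_init() }
}

fn main() {
    assert_eq!(init_pair(), (1, 2));
}
```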
4363core::mem::maybe_uninit::MaybeUninitassume_init_dropfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about it is that the data pointer must be non-null. Dropping such a `Vec<T>` however will cause undefined behavior. [`assume_init`]: MaybeUninit::assume_init
4364core::mem::maybe_uninit::MaybeUninitassume_init_mutfunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit`.
4365core::mem::maybe_uninit::MaybeUninitassume_init_readfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. Moreover, similar to the [`ptr::read`] function, this function creates a bitwise copy of the contents, regardless of whether the contained type implements the [`Copy`] trait or not. When using multiple copies of the data (by calling `assume_init_read` multiple times, or first calling `assume_init_read` and then [`assume_init`]), it is your responsibility to ensure that data may indeed be duplicated. [inv]: #initialization-invariant [`assume_init`]: MaybeUninit::assume_init
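A minimal sketch of the duplication caveat above (the helper name `read_twice_copy` is hypothetical): reading the slot twice is sound here only because `u8` is `Copy`; for a non-`Copy` type the second read would duplicate ownership.

```rust
use std::mem::MaybeUninit;

// Hypothetical helper: bitwise-read an initialized slot twice.
fn read_twice_copy() -> (u8, u8) {
    let mut slot = MaybeUninit::<u8>::uninit();
    slot.write(7);
    // SAFETY: the slot was just initialized. Two reads are fine only
    // because `u8: Copy`; duplicating a non-Copy value this way would
    // violate ownership.
    let a = unsafe { slot.assume_init_read() };
    let b = unsafe { slot.assume_init_read() };
    (a, b)
}

fn main() {
    assert_eq!(read_twice_copy(), (7, 7));
}
```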
4366core::mem::maybe_uninit::MaybeUninitassume_init_reffunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state.
4367core::numZeroablePrimitivetraitTypes implementing this trait must be primitives that are valid when zeroed. The associated `Self::NonZeroInner` type must have the same size+align as `Self`, but with a niche and bit validity making it so the following `transmutes` are sound: - `Self::NonZeroInner` to `Option<Self::NonZeroInner>` - `Option<Self::NonZeroInner>` to `Self` (And, consequently, `Self::NonZeroInner` to `Self`.)
4368core::num::nonzero::NonZerofrom_mut_uncheckedfunctionThe referenced value must not be zero.
4369core::num::nonzero::NonZeronew_uncheckedfunctionThe value must not be zero.
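A minimal sketch of discharging the `new_unchecked` precondition with a preceding check (the helper name `len_nonzero` is hypothetical):

```rust
use std::num::NonZeroUsize;

// Hypothetical helper: the length of a non-empty slice as NonZeroUsize.
fn len_nonzero<T>(xs: &[T]) -> Option<NonZeroUsize> {
    if xs.is_empty() {
        None
    } else {
        // SAFETY: we just checked that the length is not zero.
        Some(unsafe { NonZeroUsize::new_unchecked(xs.len()) })
    }
}

fn main() {
    assert_eq!(len_nonzero(&[1, 2, 3]).map(NonZeroUsize::get), Some(3));
    assert_eq!(len_nonzero::<u8>(&[]), None);
}
```

In practice the checked constructor `NonZero::new` already compiles to the same code when the compiler can see the zero check, so `new_unchecked` is only worthwhile when the check cannot be expressed locally.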
4370core::num::nonzero::NonZerounchecked_addfunctionThis results in undefined behavior when `self + rhs > usize::MAX`.
This results in undefined behavior when `self + rhs > u128::MAX`.
This results in undefined behavior when `self + rhs > u64::MAX`.
This results in undefined behavior when `self + rhs > u32::MAX`.
This results in undefined behavior when `self + rhs > u16::MAX`.
This results in undefined behavior when `self + rhs > u8::MAX`.
4371core::num::nonzero::NonZerounchecked_mulfunctionThis results in undefined behavior when `self * rhs > i64::MAX`, or `self * rhs < i64::MIN`.
This results in undefined behavior when `self * rhs > i8::MAX`, or `self * rhs < i8::MIN`.
This results in undefined behavior when `self * rhs > isize::MAX`, or `self * rhs < isize::MIN`.
This results in undefined behavior when `self * rhs > i32::MAX`, or `self * rhs < i32::MIN`.
This results in undefined behavior when `self * rhs > usize::MAX`.
This results in undefined behavior when `self * rhs > i128::MAX`, or `self * rhs < i128::MIN`.
This results in undefined behavior when `self * rhs > u128::MAX`.
This results in undefined behavior when `self * rhs > u64::MAX`.
This results in undefined behavior when `self * rhs > u32::MAX`.
This results in undefined behavior when `self * rhs > i16::MAX`, or `self * rhs < i16::MIN`.
This results in undefined behavior when `self * rhs > u16::MAX`.
This results in undefined behavior when `self * rhs > u8::MAX`.
4372core::opsDerefPuretrait
4373core::option::Optionunwrap_uncheckedfunctionCalling this method on [`None`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
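A minimal sketch of a sound `unwrap_unchecked` call, where the `None` case is ruled out immediately beforehand (the helper name `first_byte` is hypothetical):

```rust
// Hypothetical helper: first byte of a slice the caller asserts is non-empty.
fn first_byte(bytes: &[u8]) -> u8 {
    assert!(!bytes.is_empty());
    // SAFETY: the assert above guarantees `first()` returned `Some`.
    unsafe { *bytes.first().unwrap_unchecked() }
}

fn main() {
    assert_eq!(first_byte(b"abc"), b'a');
}
```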
4374core::pinPinCoerceUnsizedtraitGiven a pointer of this type, the concrete type returned by its `deref` method and (if it implements `DerefMut`) its `deref_mut` method must be the same type and must not change without a modification of the pointer. The following operations are not considered modifications: * Moving the pointer. * Performing unsizing coercions on the pointer. * Performing dynamic dispatch with the pointer. * Calling `deref` or `deref_mut` on the pointer. The concrete type of a trait object is the type that the vtable corresponds to. The concrete type of a slice is an array of the same element type and the length specified in the metadata. The concrete type of a sized type is the type itself.
4375core::pin::Pinget_unchecked_mutfunctionThis function is unsafe. You must guarantee that you will never move the data out of the mutable reference you receive when you call this function, so that the invariants on the `Pin` type can be upheld. If the underlying data is `Unpin`, `Pin::get_mut` should be used instead.
4376core::pin::Pininto_inner_uncheckedfunctionThis function is unsafe. You must guarantee that you will continue to treat the pointer `Ptr` as pinned after you call this function, so that the invariants on the `Pin` type can be upheld. If the code using the resulting `Ptr` does not continue to maintain the pinning invariants, that is a violation of the API contract and may lead to undefined behavior in later (safe) operations. Note that you must be able to guarantee that the data pointed to by `Ptr` will be treated as pinned all the way until its `drop` handler is complete! *For more information, see the [`pin` module docs][self]* If the underlying data is [`Unpin`], [`Pin::into_inner`] should be used instead.
4377core::pin::Pinmap_uncheckedfunctionThis function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning
4378core::pin::Pinmap_unchecked_mutfunctionThis function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning
4379core::pin::Pinnew_uncheckedfunctionThis constructor is unsafe because we cannot guarantee that the data pointed to by `pointer` is pinned. At its core, pinning a value means making the guarantee that the value's data will not be moved nor have its storage invalidated until it gets dropped. For a more thorough explanation of pinning, see the [`pin` module docs]. If the caller that is constructing this `Pin<Ptr>` does not ensure that the data `Ptr` points to is pinned, that is a violation of the API contract and may lead to undefined behavior in later (even safe) operations. By using this method, you are also making a promise about the [`Deref`], [`DerefMut`], and [`Drop`] implementations of `Ptr`, if they exist. Most importantly, they must not move out of their `self` arguments: `Pin::as_mut` and `Pin::as_ref` will call `DerefMut::deref_mut` and `Deref::deref` *on the pointer type `Ptr`* and expect these methods to uphold the pinning invariants. Moreover, by calling this method you promise that the reference `Ptr` dereferences to will not be moved out of again; in particular, it must not be possible to obtain a `&mut Ptr::Target` and then move out of that reference (using, for example [`mem::swap`]). For example, calling `Pin::new_unchecked` on an `&'a mut T` is unsafe because while you are able to pin it for the given lifetime `'a`, you have no control over whether it is kept pinned once `'a` ends, and therefore cannot uphold the guarantee that a value, once pinned, remains pinned until it is dropped:

```rust
use std::mem;
use std::pin::Pin;

fn move_pinned_ref<T>(mut a: T, mut b: T) {
    unsafe {
        let p: Pin<&mut T> = Pin::new_unchecked(&mut a);
        // This should mean the pointee `a` can never move again.
    }
    mem::swap(&mut a, &mut b); // Potential UB down the road ⚠️
    // The address of `a` changed to `b`'s stack slot, so `a` got moved even
    // though we have previously pinned it! We have violated the pinning API contract.
}
```

A value, once pinned, must remain pinned until it is dropped (unless its type implements `Unpin`). Because `Pin<&mut T>` does not own the value, dropping the `Pin` will not drop the value and will not end the pinning contract. So moving the value after dropping the `Pin<&mut T>` is still a violation of the API contract. Similarly, calling `Pin::new_unchecked` on an `Rc<T>` is unsafe because there could be aliases to the same data that are not subject to the pinning restrictions:

```rust
use std::rc::Rc;
use std::pin::Pin;

fn move_pinned_rc<T>(mut x: Rc<T>) {
    // This should mean the pointee can never move again.
    let pin = unsafe { Pin::new_unchecked(Rc::clone(&x)) };
    {
        let p: Pin<&T> = pin.as_ref();
        // ...
    }
    drop(pin);

    let content = Rc::get_mut(&mut x).unwrap(); // Potential UB down the road ⚠️
    // Now, if `x` was the only reference, we have a mutable reference to
    // data that we pinned above, which we could use to move it as we have
    // seen in the previous example. We have violated the pinning API contract.
}
```
4380core::pointeraddfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_add`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_add`]: #method.wrapping_add [allocation]: crate::ptr#allocation
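A minimal sketch of an in-bounds `add` (the helper name `sum_via_raw` is hypothetical): every offset stays inside the `Vec`'s allocation, which the doc above notes also guarantees the byte offset fits in an `isize`.

```rust
// Hypothetical helper: sum a Vec through raw-pointer arithmetic.
fn sum_via_raw(v: &Vec<u32>) -> u32 {
    let base = v.as_ptr();
    let mut total = 0;
    for i in 0..v.len() {
        // SAFETY: `i < v.len()`, so `base.add(i)` stays in bounds of the
        // Vec's allocation and the computed byte offset fits in an isize.
        total += unsafe { *base.add(i) };
    }
    total
}

fn main() {
    assert_eq!(sum_via_raw(&vec![1, 2, 3, 4]), 10);
}
```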
4381core::pointeras_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4382core::pointeras_mut_uncheckedfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4383core::pointeras_reffunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4384core::pointeras_ref_uncheckedfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4385core::pointeras_uninit_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4386core::pointeras_uninit_reffunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
4387core::pointeras_uninit_slicefunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation
4388core::pointeras_uninit_slice_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation
4389core::pointerbyte_addfunction
4390core::pointerbyte_offsetfunction
4391core::pointerbyte_offset_fromfunction
4392core::pointerbyte_offset_from_unsignedfunction
4393core::pointerbyte_subfunction
4394core::pointercopy_fromfunction
4395core::pointercopy_from_nonoverlappingfunction
4396core::pointercopy_tofunction
4397core::pointercopy_to_nonoverlappingfunction
4398core::pointerdrop_in_placefunction
4399core::pointerget_uncheckedfunction
4400core::pointerget_unchecked_mutfunction
4401core::pointeroffsetfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Note that "range" here refers to a half-open range as usual in Rust, i.e., `self..result` for non-negative offsets and `result..self` for negative offsets. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_offset`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_offset`]: #method.wrapping_offset [allocation]: crate::ptr#allocation
4402core::pointeroffset_fromfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be [derived from][crate::ptr#provenance] a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that allocation. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation
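A minimal sketch of a same-allocation `offset_from` (the helper name `index_of_element` is hypothetical): both pointers derive from the same slice, so the distance is in bounds and an exact multiple of the element size.

```rust
// Hypothetical helper: recover an element's index from its reference.
// SAFETY PRECONDITION (assumed by this sketch): `elem` must borrow from
// `slice`, so both pointers derive from the same allocation.
fn index_of_element(slice: &[u16], elem: &u16) -> usize {
    let elem_ptr = elem as *const u16;
    // SAFETY: same allocation, in-bounds range, and the distance is an
    // exact multiple of size_of::<u16>() because both point at elements.
    unsafe { elem_ptr.offset_from(slice.as_ptr()) as usize }
}

fn main() {
    let xs = [10u16, 20, 30];
    assert_eq!(index_of_element(&xs, &xs[2]), 2);
}
```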
4403core::pointeroffset_from_unsignedfunction- The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`.
4404core::pointerreadfunction
4405core::pointerread_unalignedfunction
4406core::pointerread_volatilefunction
4407core::pointerreplacefunction
4408core::pointersplit_at_mutfunction`mid` must be [in-bounds] of the underlying [allocation], meaning `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. Since `len` being in-bounds is not a safety invariant of `*mut [T]`, the safety requirements of this method are the same as for [`split_at_mut_unchecked`]. The explicit bounds check is only as useful as `len` is correct. [`split_at_mut_unchecked`]: #method.split_at_mut_unchecked [in-bounds]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4409core::pointersplit_at_mut_uncheckedfunction`mid` must be [in-bounds] of the underlying [allocation], meaning `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. [in-bounds]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4410core::pointersubfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_sub`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_sub`]: #method.wrapping_sub [allocation]: crate::ptr#allocation
4411core::pointerswapfunction
4412core::pointerwritefunction
4413core::pointerwrite_bytesfunction
4414core::pointerwrite_unalignedfunction
4415core::pointerwrite_volatilefunction
4416core::prelude::v1Sendtrait
4417core::prelude::v1Synctrait
4418core::ptrcopyfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0, and `dst` must remain valid even when `src` is read for `count * size_of::<T>()` bytes. (This means if the memory ranges overlap, the `dst` pointer must not be invalidated by `src` reads.) * Both `src` and `dst` must be properly aligned. Like [`read`], `copy` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety
4419core::ptrcopy_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0. * Both `src` and `dst` must be properly aligned. * The region of memory beginning at `src` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `dst` with the same size. Like [`read`], `copy_nonoverlapping` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using *both* the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety
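A minimal sketch of a sound `copy_nonoverlapping` (the helper name `clone_into_array` is hypothetical): both buffers are valid and aligned for the full byte count, and as distinct locals they cannot overlap.

```rust
use std::ptr;

// Hypothetical helper: byte-copy one fixed array into another.
fn clone_into_array(src: &[u8; 4]) -> [u8; 4] {
    let mut dst = [0u8; 4];
    // SAFETY: both pointers are valid and aligned for 4 bytes, and the
    // source and destination are separate locals, so they never overlap.
    unsafe { ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), 4) };
    dst
}

fn main() {
    assert_eq!(clone_into_array(&[1, 2, 3, 4]), [1, 2, 3, 4]);
}
```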
4420core::ptrdrop_in_placefunctionBehavior is undefined if any of the following conditions are violated: * `to_drop` must be [valid] for both reads and writes. * `to_drop` must be properly aligned, even if `T` has size 0. * `to_drop` must be nonnull, even if `T` has size 0. * The value `to_drop` points to must be valid for dropping, which may mean it must uphold additional invariants. These invariants depend on the type of the value being dropped. For instance, when dropping a Box, the box's pointer to the heap must be valid. * While `drop_in_place` is executing, the only way to access parts of `to_drop` is through the `&mut self` references supplied to the `Drop::drop` methods that `drop_in_place` invokes. Additionally, if `T` is not [`Copy`], using the pointed-to value after calling `drop_in_place` can cause undefined behavior. Note that `*to_drop = foo` counts as a use because it will cause the value to be dropped again. [`write()`] can be used to overwrite data without causing it to be dropped. [valid]: self#safety
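A minimal sketch of a sound `drop_in_place` (the names `Tracked` and `drop_in_place_once` are hypothetical): the value is initialized, the pointer is valid, aligned, and non-null, and `ManuallyDrop` ensures the destructor does not run a second time.

```rust
use std::cell::Cell;
use std::mem::ManuallyDrop;
use std::ptr;

// Hypothetical drop-counting type used only for this demonstration.
struct Tracked<'a>(&'a Cell<u32>);
impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        self.0.set(self.0.get() + 1);
    }
}

fn drop_in_place_once() -> u32 {
    let drops = Cell::new(0);
    let mut slot = ManuallyDrop::new(Tracked(&drops));
    // SAFETY: the value is initialized; the pointer is valid, aligned, and
    // non-null; and `ManuallyDrop` guarantees the value is never dropped
    // again or otherwise used after this call.
    unsafe { ptr::drop_in_place(&mut *slot as *mut Tracked<'_>) };
    drops.get()
}

fn main() {
    assert_eq!(drop_in_place_once(), 1); // destructor ran exactly once
}
```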
4421core::ptrreadfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads or `T` must be a ZST. * `src` must be properly aligned. Use [`read_unaligned`] if this is not the case. * `src` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned.
4422core::ptrread_unalignedfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads. * `src` must point to a properly initialized value of type `T`. Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. [read-ownership]: read#ownership-of-the-returned-value [valid]: self#safety
4423core::ptrread_volatilefunctionLike [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. However, storing non-[`Copy`] types in volatile memory is almost certainly incorrect. Behavior is undefined if any of the following conditions are violated: * `src` must be either [valid] for reads, or `T` must be a ZST, or `src` must point to memory outside of all Rust allocations and reading from that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `src` must be properly aligned. * Reading from `src` must produce a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety [read-ownership]: read#ownership-of-the-returned-value
4424core::ptrreplacefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for both reads and writes or `T` must be a ZST. * `dst` must be properly aligned. * `dst` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
4425core::ptrswapfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the other pointer is written. (This means if the memory ranges overlap, the two pointers must not be subject to aliasing restrictions relative to each other.) * Both `x` and `y` must be properly aligned. Note that even if `T` has size `0`, the pointers must be properly aligned. [valid]: self#safety
4426core::ptrswap_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes of `count * size_of::<T>()` bytes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `y` with the same size. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [valid]: self#safety
4427core::ptrwritefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes or `T` must be a ZST. * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the case. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
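A minimal sketch pairing `ptr::write` with `ptr::read` on a `MaybeUninit` slot (the helper name `write_then_read` is hypothetical): `write` is used precisely because it does not drop the uninitialized "old" contents.

```rust
use std::mem::MaybeUninit;
use std::ptr;

// Hypothetical helper: initialize a slot via ptr::write, then read it back.
fn write_then_read() -> i64 {
    let mut slot = MaybeUninit::<i64>::uninit();
    // SAFETY: `as_mut_ptr` is valid for writes and properly aligned, and
    // `write` does not drop the (uninitialized) previous contents.
    unsafe { ptr::write(slot.as_mut_ptr(), -5) };
    // SAFETY: the slot was fully initialized by the write above.
    unsafe { ptr::read(slot.as_ptr()) }
}

fn main() {
    assert_eq!(write_then_read(), -5);
}
```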
4428core::ptrwrite_bytesfunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes. * `dst` must be properly aligned. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointer must be properly aligned. Additionally, note that changing `*dst` in this way can easily lead to undefined behavior (UB) later if the written bytes are not a valid representation of some `T`. For instance, the following is an **incorrect** use of this function:

```rust,no_run
unsafe {
    let mut value: u8 = 0;
    let ptr: *mut bool = &mut value as *mut u8 as *mut bool;
    let _bool = ptr.read(); // This is fine, `ptr` points to a valid `bool`.
    ptr.write_bytes(42u8, 1); // This function itself does not cause UB...
    let _bool = ptr.read(); // ...but it makes this operation UB! ⚠️
}
```

[valid]: crate::ptr#safety
4429core::ptrwrite_unalignedfunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes. [valid]: self#safety
4430core::ptrwrite_volatilefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be either [valid] for writes, or `T` must be a ZST, or `dst` must point to memory outside of all Rust allocations and writing to that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `dst` must be properly aligned. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
4431core::ptr::non_null::NonNulladdfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
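A sketch of an in-bounds `NonNull::add` (the helper `second` is hypothetical): the length check guarantees the computed offset stays inside the slice's allocation.

```rust
use core::ptr::NonNull;

// Hypothetical helper: read the second element through a NonNull pointer.
fn second(xs: &[i32]) -> Option<i32> {
    if xs.len() < 2 {
        return None;
    }
    let base = NonNull::from(&xs[0]);
    // SAFETY: `xs.len() >= 2`, so `base.add(1)` stays in bounds of the
    // slice's allocation (hence the offset cannot overflow `isize`), and
    // the element at index 1 is initialized.
    Some(unsafe { base.add(1).read() })
}
```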
4432core::ptr::non_null::NonNullas_mutfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4433core::ptr::non_null::NonNullas_reffunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
4434core::ptr::non_null::NonNullas_uninit_mutfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
4435core::ptr::non_null::NonNullas_uninit_reffunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
4436core::ptr::non_null::NonNullas_uninit_slicefunctionWhen calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`]. [valid]: crate::ptr#safety
4437core::ptr::non_null::NonNullas_uninit_slice_mutfunctionWhen calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`]. [valid]: crate::ptr#safety
4438core::ptr::non_null::NonNullbyte_addfunction
4439core::ptr::non_null::NonNullbyte_offsetfunction
4440core::ptr::non_null::NonNullbyte_offset_fromfunction
4441core::ptr::non_null::NonNullbyte_offset_from_unsignedfunction
4442core::ptr::non_null::NonNullbyte_subfunction
4443core::ptr::non_null::NonNullcopy_fromfunction
4444core::ptr::non_null::NonNullcopy_from_nonoverlappingfunction
4445core::ptr::non_null::NonNullcopy_tofunction
4446core::ptr::non_null::NonNullcopy_to_nonoverlappingfunction
4447core::ptr::non_null::NonNulldrop_in_placefunction
4448core::ptr::non_null::NonNullget_unchecked_mutfunction
4449core::ptr::non_null::NonNullnew_uncheckedfunction`ptr` must be non-null.
4450core::ptr::non_null::NonNulloffsetfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
4451core::ptr::non_null::NonNulloffset_fromfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be *derived from* a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that object. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation
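A sketch of `offset_from` between two pointers derived from the same allocation (the helper `span` is hypothetical and requires a non-empty slice):

```rust
use core::ptr::NonNull;

// Hypothetical helper: element distance from the first to the last element.
fn span(xs: &[u16]) -> isize {
    let first = NonNull::from(&xs[0]);
    let last = NonNull::from(&xs[xs.len() - 1]);
    // SAFETY: both pointers are derived from the same slice allocation,
    // the range between them is in bounds, and their byte distance is an
    // exact multiple of `size_of::<u16>()`.
    unsafe { last.offset_from(first) }
}
```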
4452core::ptr::non_null::NonNulloffset_from_unsignedfunction- The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`.
4453core::ptr::non_null::NonNullreadfunction
4454core::ptr::non_null::NonNullread_unalignedfunction
4455core::ptr::non_null::NonNullread_volatilefunction
4456core::ptr::non_null::NonNullreplacefunction
4457core::ptr::non_null::NonNullsubfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
4458core::ptr::non_null::NonNullswapfunction
4459core::ptr::non_null::NonNullwritefunction
4460core::ptr::non_null::NonNullwrite_bytesfunction
4461core::ptr::non_null::NonNullwrite_unalignedfunction
4462core::ptr::non_null::NonNullwrite_volatilefunction
4463core::result::Resultunwrap_err_uncheckedfunctionCalling this method on an [`Ok`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4464core::result::Resultunwrap_uncheckedfunctionCalling this method on an [`Err`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
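A sketch of `unwrap_unchecked` where the `Err` case is locally impossible (the round-trip helper `reparse` is hypothetical):

```rust
// Hypothetical helper: parse back a string we just produced ourselves.
fn reparse(n: u32) -> u32 {
    let s = n.to_string();
    // SAFETY: `s` was produced by `u32::to_string`, so parsing it back
    // as `u32` cannot yield `Err`.
    unsafe { s.parse::<u32>().unwrap_unchecked() }
}
```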
4465core::sliceGetDisjointMutIndextraitIf `is_in_bounds()` returns `true` and `is_overlapping()` returns `false`, it must be safe to index the slice with the indices.
4466core::sliceSliceIndextrait
4467core::slicealign_tofunctionThis method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
4468core::slicealign_to_mutfunctionThis method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
4469core::sliceas_ascii_uncheckedfunctionEvery byte in the slice must be in `0..=127`, or else this is UB.
4470core::sliceas_chunks_uncheckedfunctionThis may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`.
4471core::sliceas_chunks_unchecked_mutfunctionThis may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`.
4472core::sliceassume_init_dropfunctionIt is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about it is that the data pointer must be non-null. Dropping such a `Vec<T>` however will cause undefined behavior.
4473core::sliceassume_init_mutfunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit` slice.
4474core::sliceassume_init_reffunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state.
4475core::slicefrom_mut_ptr_rangefunctionBehavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_mut_ptr_range`] fulfills these requirements.
4476core::slicefrom_ptr_rangefunctionBehavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_ptr_range`] fulfills these requirements.
4477core::slicefrom_raw_partsfunctionBehavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for reads for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. See [below](#incorrect-usage) for an example incorrectly not taking this into account. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`].
4478core::slicefrom_raw_parts_mutfunctionBehavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for both reads and writes for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. [valid]: ptr#safety [`NonNull::dangling()`]: ptr::NonNull::dangling
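A sketch of `slice::from_raw_parts` with the conditions above discharged by construction (the helper `head` is hypothetical): the raw parts come from an existing borrowed slice, so non-nullness, alignment, initialization, and non-mutation all follow from the input borrow.

```rust
use core::slice;

// Hypothetical helper: first `n` bytes of a slice, rebuilt from raw parts.
fn head(xs: &[u8], n: usize) -> &[u8] {
    assert!(n <= xs.len());
    // SAFETY: `xs.as_ptr()` is non-null, aligned, and points at at least
    // `n` initialized bytes; the result reborrows `xs`, so the memory is
    // not mutated for the duration of the returned lifetime.
    unsafe { slice::from_raw_parts(xs.as_ptr(), n) }
}
```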
4479core::sliceget_disjoint_unchecked_mutfunctionCalling this method with overlapping or out-of-bounds indices is *[undefined behavior]* even if the resulting references are not used.
4480core::sliceget_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get(index).unwrap_unchecked()`. It's UB to call `.get_unchecked(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked(..len + 1)`, `.get_unchecked(..=len)`, or similar. [`get`]: slice::get [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4481core::sliceget_unchecked_mutfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get_mut(index).unwrap_unchecked()`. It's UB to call `.get_unchecked_mut(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked_mut(..len + 1)`, `.get_unchecked_mut(..=len)`, or similar. [`get_mut`]: slice::get_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
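A sketch of `get_unchecked_mut` where the bound is established by the loop itself (the helper `double_all` is hypothetical):

```rust
// Hypothetical helper: double every element without per-access bounds checks.
fn double_all(xs: &mut [i32]) {
    for i in 0..xs.len() {
        // SAFETY: `i < xs.len()` holds on every iteration of the loop.
        unsafe { *xs.get_unchecked_mut(i) *= 2 };
    }
}
```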
4482core::slicesplit_at_mut_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at_mut`]: slice::split_at_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4483core::slicesplit_at_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at`]: slice::split_at [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
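A sketch of `split_at_unchecked` with a split point that is in bounds by construction (the helper `halves` is hypothetical):

```rust
// Hypothetical helper: split a slice at its midpoint.
fn halves(xs: &[u8]) -> (&[u8], &[u8]) {
    let mid = xs.len() / 2;
    // SAFETY: `mid = len / 2 <= len`, so `0 <= mid <= xs.len()` holds.
    unsafe { xs.split_at_unchecked(mid) }
}
```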
4484core::sliceswap_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]*. The caller has to ensure that `a < self.len()` and `b < self.len()`.
4485core::stras_ascii_uncheckedfunctionEvery character in this string must be ASCII, or else this is UB.
4486core::stras_bytes_mutfunctionThe caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior.
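A sketch of a sound `as_bytes_mut` use (the ROT13 helper is hypothetical): the transformation maps ASCII letters to ASCII letters and leaves all other bytes untouched, so the buffer is still valid UTF-8 when the borrow ends.

```rust
// Hypothetical helper: in-place ROT13 over the ASCII letters of a string.
fn rot13(s: &mut str) {
    // SAFETY: ROT13 maps ASCII letters to ASCII letters and leaves every
    // other byte unchanged, so the contents remain valid UTF-8.
    for b in unsafe { s.as_bytes_mut() } {
        *b = match *b {
            b'a'..=b'z' => b'a' + (*b - b'a' + 13) % 26,
            b'A'..=b'Z' => b'A' + (*b - b'A' + 13) % 26,
            other => other,
        };
    }
}
```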
4487core::strfrom_raw_partsfunction
4488core::strfrom_raw_parts_mutfunction
4489core::strfrom_utf8_uncheckedfunctionThe bytes passed in must be valid UTF-8.
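A sketch of `from_utf8_unchecked` with the UTF-8 precondition established by an ASCII check (the helper `ascii_str` is hypothetical):

```rust
use core::str;

// Hypothetical helper: view verified-ASCII bytes as a `str`.
fn ascii_str(bytes: &[u8]) -> &str {
    assert!(bytes.is_ascii());
    // SAFETY: ASCII is a strict subset of UTF-8, and `is_ascii` was
    // verified above, so the bytes are valid UTF-8.
    unsafe { str::from_utf8_unchecked(bytes) }
}
```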
4490core::strfrom_utf8_unchecked_mutfunction
4491core::strget_uncheckedfunctionCallers of this function are responsible that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
4492core::strget_unchecked_mutfunctionCallers of this function are responsible that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
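A sketch of `str::get_unchecked` where all three preconditions follow from how the index was obtained (the helper `before_colon` is hypothetical): an index returned by `find` is always in bounds and on a UTF-8 sequence boundary.

```rust
// Hypothetical helper: the prefix of `s` before the first `:`, if any.
fn before_colon(s: &str) -> &str {
    match s.find(':') {
        // SAFETY: `i` comes from `find`, so it is within bounds and lies
        // on a UTF-8 sequence boundary; the start index `0` trivially does.
        Some(i) => unsafe { s.get_unchecked(..i) },
        None => s,
    }
}
```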
4493core::strnext_code_pointfunction`bytes` must produce a valid UTF-8-like (UTF-8 or WTF-8) string
4494core::strslice_mut_uncheckedfunctionCallers of this function are responsible that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
4495core::strslice_uncheckedfunctionCallers of this function are responsible that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
4496core::str::patternReverseSearchertrait
4497core::str::patternSearchertrait
4498core::sync::atomicAtomicPrimitivetrait
4499core::sync::atomic::Atomicfrom_ptrfunction* `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this can be bigger than `align_of::<*mut T>()`). * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. * You must adhere to the [Memory model for atomic accesses]. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization. [valid]: crate::ptr#safety [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
The per-type `from_ptr` constructors (`AtomicBool`, `AtomicI8`/`AtomicU8`, `AtomicI16`/`AtomicU16`, `AtomicI32`/`AtomicU32`, `AtomicI64`/`AtomicU64`, `AtomicIsize`/`AtomicUsize`) state the same three conditions with the corresponding `align_of::<AtomicN>()` requirement; for the one-byte types (`AtomicBool`, `AtomicI8`, `AtomicU8`) the alignment condition is always satisfied, since their alignment is `1`.
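A sketch of `from_ptr` on exclusively owned memory (the helper `bump` is hypothetical), shown for `AtomicU32`; the exclusive borrow is what rules out conflicting non-atomic accesses for the lifetime of the atomic view.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Hypothetical helper: increment a plain `u32` through an atomic view.
fn bump(v: &mut u32) -> u32 {
    // SAFETY: `v` is valid for reads and writes for the whole borrow,
    // aligned for `AtomicU32` (alignment 4), and the exclusive `&mut`
    // borrow guarantees no conflicting non-atomic access can occur.
    let a = unsafe { AtomicU32::from_ptr(v as *mut u32) };
    a.fetch_add(1, Ordering::Relaxed) + 1
}
```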
4500core::task::wake::LocalWakerfrom_rawfunction
4501core::task::wake::LocalWakernewfunctionThe behavior of the returned `LocalWaker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld.
4502core::task::wake::Wakerfrom_rawfunctionThe behavior of the returned `Waker` is undefined if the contract defined in [`RawWaker`]'s and [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html
4503core::task::wake::WakernewfunctionThe behavior of the returned `Waker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html
4504core::u128unchecked_addfunctionThis results in undefined behavior when `self + rhs > u128::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u128::checked_add [`wrapping_add`]: u128::wrapping_add
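A sketch of unchecked integer arithmetic guarded by an explicit precondition (the helper `sum_capped` is hypothetical), shown for `u128`; the other integer widths below follow the same pattern.

```rust
// Hypothetical helper: add with the overflow case ruled out up front.
fn sum_capped(a: u128, b: u128) -> u128 {
    assert!(a <= u128::MAX - b);
    // SAFETY: the assert above rules out overflow, i.e. `checked_add`
    // would return `Some` here.
    unsafe { a.unchecked_add(b) }
}
```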
4505core::u128unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4506core::u128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4507core::u128unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u128::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4508core::u128unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u128::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4509core::u128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u128::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u128::checked_mul [`wrapping_mul`]: u128::wrapping_mul
4510core::u128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl
4511core::u128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS` i.e. when [`u128::shl_exact`] would return `None`.
4512core::u128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr
4513core::u128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS` i.e. when [`u128::shr_exact`] would return `None`.
4514core::u128unchecked_subfunctionThis results in undefined behavior when `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u128::checked_sub [`wrapping_sub`]: u128::wrapping_sub
4515core::u16unchecked_addfunctionThis results in undefined behavior when `self + rhs > u16::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u16::checked_add [`wrapping_add`]: u16::wrapping_add
4516core::u16unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4517core::u16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4518core::u16unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u16::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4519core::u16unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u16::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4520core::u16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u16::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u16::checked_mul [`wrapping_mul`]: u16::wrapping_mul
4521core::u16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl
4522core::u16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS` i.e. when [`u16::shl_exact`] would return `None`.
4523core::u16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr
4524core::u16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS`, i.e. when [`u16::shr_exact`] would return `None`.
4525core::u16unchecked_subfunctionThis results in undefined behavior when `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u16::checked_sub [`wrapping_sub`]: u16::wrapping_sub
4526core::u32unchecked_addfunctionThis results in undefined behavior when `self + rhs > u32::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u32::checked_add [`wrapping_add`]: u32::wrapping_add
4527core::u32unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4528core::u32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4529core::u32unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u32::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4530core::u32unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u32::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4531core::u32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u32::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u32::checked_mul [`wrapping_mul`]: u32::wrapping_mul
4532core::u32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl
4533core::u32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shl_exact`] would return `None`.
4534core::u32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr
4535core::u32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shr_exact`] would return `None`.
4536core::u32unchecked_subfunctionThis results in undefined behavior when `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u32::checked_sub [`wrapping_sub`]: u32::wrapping_sub
4537core::u64unchecked_addfunctionThis results in undefined behavior when `self + rhs > u64::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u64::checked_add [`wrapping_add`]: u64::wrapping_add
4538core::u64unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4539core::u64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4540core::u64unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u64::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4541core::u64unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u64::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4542core::u64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u64::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u64::checked_mul [`wrapping_mul`]: u64::wrapping_mul
4543core::u64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl
4544core::u64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shl_exact`] would return `None`.
4545core::u64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr
4546core::u64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shr_exact`] would return `None`.
4547core::u64unchecked_subfunctionThis results in undefined behavior when `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u64::checked_sub [`wrapping_sub`]: u64::wrapping_sub
4548core::u8as_ascii_uncheckedfunctionThis byte must be valid ASCII, or else this is UB.
4549core::u8unchecked_addfunctionThis results in undefined behavior when `self + rhs > u8::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u8::checked_add [`wrapping_add`]: u8::wrapping_add
4550core::u8unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4551core::u8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4552core::u8unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u8::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4553core::u8unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u8::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4554core::u8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u8::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u8::checked_mul [`wrapping_mul`]: u8::wrapping_mul
4555core::u8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl
4556core::u8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shl_exact`] would return `None`.
4557core::u8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr
4558core::u8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shr_exact`] would return `None`.
4559core::u8unchecked_subfunctionThis results in undefined behavior when `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u8::checked_sub [`wrapping_sub`]: u8::wrapping_sub
4560core::usizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > usize::MAX`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: usize::checked_add [`wrapping_add`]: usize::wrapping_add
4561core::usizeunchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4562core::usizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4563core::usizeunchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `usize::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4564core::usizeunchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `usize::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4565core::usizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > usize::MAX`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: usize::checked_mul [`wrapping_mul`]: usize::wrapping_mul
4566core::usizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl
4567core::usizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shl_exact`] would return `None`.
4568core::usizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr
4569core::usizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shr_exact`] would return `None`.
4570core::usizeunchecked_subfunctionThis results in undefined behavior when `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: usize::checked_sub [`wrapping_sub`]: usize::wrapping_sub
4571std::charas_ascii_uncheckedfunctionThis char must be within the ASCII range, or else this is UB.
4572std::charfrom_u32_uncheckedfunctionThis function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32
4573std::collections::hash::map::HashMapget_disjoint_unchecked_mutfunctionCalling this method with overlapping keys is *[undefined behavior]* even if the resulting references are not used. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
4574std::envremove_varfunctionThis function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To prevent a child process from inheriting an environment variable, you can instead use [`Command::env_remove`] or [`Command::env_clear`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env_remove`]: crate::process::Command::env_remove [`Command::env_clear`]: crate::process::Command::env_clear
4575std::envset_varfunctionThis function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla (for POSIX)](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To pass an environment variable to a child process, you can instead use [`Command::env`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env`]: crate::process::Command::env
4576std::f128to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
4577std::f16to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
4578std::f32to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
4579std::f64to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
4580std::ffi::os_str::OsStrfrom_encoded_bytes_uncheckedfunctionAs the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsStr` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring.
4581std::ffi::os_str::OsStringfrom_encoded_bytes_uncheckedfunctionAs the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsString` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring.
4582std::i128unchecked_addfunctionThis results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add
4583std::i128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4584std::i128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul
4585std::i128unchecked_negfunctionThis results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg
4586std::i128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl
4587std::i128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i128::shl_exact`] would return `None`.
4588std::i128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr
4589std::i128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS`, i.e. when [`i128::shr_exact`] would return `None`.
4590std::i128unchecked_subfunctionThis results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub
4591std::i16unchecked_addfunctionThis results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add
4592std::i16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4593std::i16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul
4594std::i16unchecked_negfunctionThis results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg
4595std::i16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl
4596std::i16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i16::shl_exact`] would return `None`.
4597std::i16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr
4598std::i16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS`, i.e. when [`i16::shr_exact`] would return `None`.
4599std::i16unchecked_subfunctionThis results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub
4600std::i32unchecked_addfunctionThis results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add
4601std::i32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4602std::i32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul
4603std::i32unchecked_negfunctionThis results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg
4604std::i32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl
4605std::i32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i32::shl_exact`] would return `None`.
4606std::i32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr
4607std::i32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS`, i.e. when [`i32::shr_exact`] would return `None`.
4608std::i32unchecked_subfunctionThis results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub
4609std::i64unchecked_addfunctionThis results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add
4610std::i64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4611std::i64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul
4612std::i64unchecked_negfunctionThis results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg
4613std::i64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl
4614std::i64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i64::shl_exact`] would return `None`.
4615std::i64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr
4616std::i64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS`, i.e. when [`i64::shr_exact`] would return `None`.
4617std::i64unchecked_subfunctionThis results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub
4618std::i8unchecked_addfunctionThis results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add
4619std::i8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4620std::i8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul
4621std::i8unchecked_negfunctionThis results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg
4622std::i8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl
4623std::i8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i8::shl_exact`] would return `None`.
4624std::i8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr
4625std::i8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS`, i.e. when [`i8::shr_exact`] would return `None`.
4626std::i8unchecked_subfunctionThis results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub
4627std::isizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add
4628std::isizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4629std::isizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul
4630std::isizeunchecked_negfunctionThis results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg
4631std::isizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl
4632std::isizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`isize::shl_exact`] would return `None`.
4633std::isizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr
4634std::isizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS`, i.e. when [`isize::shr_exact`] would return `None`.
4635std::isizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub
4636std::os::fd::owned::BorrowedFdborrow_rawfunctionThe resource pointed to by `fd` must remain open for the duration of the returned `BorrowedFd`.
4637std::os::windows::io::handle::BorrowedHandleborrow_rawfunctionThe resource pointed to by `handle` must be a valid open handle, and it must remain open for the duration of the returned `BorrowedHandle`. Note that it *may* have the value `INVALID_HANDLE_VALUE` (-1), which is sometimes a valid handle value. See [here] for the full story. And, it *may* have the value `NULL` (0), which can occur when consoles are detached from processes, or when `windows_subsystem` is used. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
4638std::os::windows::io::handle::HandleOrInvalidfrom_raw_handlefunctionThe passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be `INVALID_HANDLE_VALUE` (-1). Note that not all Windows APIs use `INVALID_HANDLE_VALUE` for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
4639std::os::windows::io::handle::HandleOrNullfrom_raw_handlefunctionThe passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be null. Note that not all Windows APIs use null for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
4640std::os::windows::io::socket::BorrowedSocketborrow_rawfunctionThe resource pointed to by `socket` must remain open for the duration of the returned `BorrowedSocket`, and it must not have the value `INVALID_SOCKET`.
4641std::os::windows::process::ProcThreadAttributeListBuilderraw_attributefunctionThis function is marked `unsafe` because it deals with raw pointers and sizes. The caller must ensure that the value outlives the resulting [`ProcThreadAttributeList`] and that the size parameter is valid.
4642std::stras_ascii_uncheckedfunctionEvery character in this string must be ASCII, or else this is UB.
4643std::stras_bytes_mutfunctionThe caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior.
4644std::strfrom_utf8_uncheckedfunctionThe bytes passed in must be valid UTF-8.
4645std::strfrom_utf8_unchecked_mutfunctionThe bytes passed in must be valid UTF-8.
4646std::strget_uncheckedfunctionCallers of this function must ensure that the following preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
4647std::strget_unchecked_mutfunctionCallers of this function must ensure that the following preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
4648std::strslice_mut_uncheckedfunctionCallers of this function are responsible for ensuring that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
4649std::strslice_uncheckedfunctionCallers of this function are responsible for ensuring that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
4650std::thread::builder::Builderspawn_uncheckedfunctionThe caller has to ensure that the spawned thread does not outlive any references in the supplied thread closure and its return type. This can be guaranteed in two ways: - ensure that [`join`][`JoinHandle::join`] is called before any referenced data is dropped - use only types with `'static` lifetime bounds, i.e., those with no or only `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`] and [`thread::spawn`] enforce this property statically)
4651std::thread::thread::Threadfrom_rawfunctionThis function is unsafe because improper use may lead to memory unsafety, even if the returned `Thread` is never accessed. Creating a `Thread` from a pointer other than one returned from [`Thread::into_raw`] is **undefined behavior**. Calling this function twice on the same raw pointer can lead to a double-free if both `Thread` instances are dropped.
4652std::u128unchecked_addfunctionThis results in undefined behavior when `self + rhs > u128::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u128::checked_add
4653std::u128unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4654std::u128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4655std::u128unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u128::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4656std::u128unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u128::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4657std::u128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u128::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u128::checked_mul
4658std::u128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl
4659std::u128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS`, i.e. when [`u128::shl_exact`] would return `None`.
4660std::u128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr
4661std::u128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS`, i.e. when [`u128::shr_exact`] would return `None`.
4662std::u128unchecked_subfunctionThis results in undefined behavior when `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u128::checked_sub
4663std::u16unchecked_addfunctionThis results in undefined behavior when `self + rhs > u16::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u16::checked_add
4664std::u16unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4665std::u16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4666std::u16unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u16::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4667std::u16unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u16::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4668std::u16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u16::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u16::checked_mul
4669std::u16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl
4670std::u16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS`, i.e. when [`u16::shl_exact`] would return `None`.
4671std::u16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr
4672std::u16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS`, i.e. when [`u16::shr_exact`] would return `None`.
4673std::u16unchecked_subfunctionThis results in undefined behavior when `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u16::checked_sub
4674std::u32unchecked_addfunctionThis results in undefined behavior when `self + rhs > u32::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u32::checked_add
4675std::u32unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4676std::u32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4677std::u32unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u32::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4678std::u32unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u32::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4679std::u32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u32::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u32::checked_mul
4680std::u32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl
4681std::u32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shl_exact`] would return `None`.
4682std::u32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr
4683std::u32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shr_exact`] would return `None`.
4684std::u32unchecked_subfunctionThis results in undefined behavior when `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u32::checked_sub
4685std::u64unchecked_addfunctionThis results in undefined behavior when `self + rhs > u64::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u64::checked_add
4686std::u64unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4687std::u64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4688std::u64unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u64::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4689std::u64unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u64::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4690std::u64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u64::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u64::checked_mul
4691std::u64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl
4692std::u64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shl_exact`] would return `None`.
4693std::u64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr
4694std::u64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shr_exact`] would return `None`.
4695std::u64unchecked_subfunctionThis results in undefined behavior when `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u64::checked_sub
4696std::u8as_ascii_uncheckedfunctionThis byte must be valid ASCII, or else this is UB.
4697std::u8unchecked_addfunctionThis results in undefined behavior when `self + rhs > u8::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u8::checked_add
4698std::u8unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4699std::u8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4700std::u8unchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `u8::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4701std::u8unchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `u8::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4702std::u8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u8::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u8::checked_mul
4703std::u8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl
4704std::u8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shl_exact`] would return `None`.
4705std::u8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr
4706std::u8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shr_exact`] would return `None`.
4707std::u8unchecked_subfunctionThis results in undefined behavior when `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u8::checked_sub
4708std::usizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > usize::MAX`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: usize::checked_add
4709std::usizeunchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
4710std::usizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
4711std::usizeunchecked_funnel_shlfunctionThis results in undefined behavior if `n` is greater than or equal to `usize::BITS`, i.e. when [`funnel_shl`](Self::funnel_shl) would panic.
4712std::usizeunchecked_funnel_shrfunctionThis results in undefined behavior if `n` is greater than or equal to `usize::BITS`, i.e. when [`funnel_shr`](Self::funnel_shr) would panic.
4713std::usizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > usize::MAX`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: usize::checked_mul
4714std::usizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl
4715std::usizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shl_exact`] would return `None`.
4716std::usizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr
4717std::usizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shr_exact`] would return `None`.
4718std::usizeunchecked_subfunctionThis results in undefined behavior when `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: usize::checked_sub