Public Unsafe APIs — nightly (rustc 1.96.0-nightly (d9563937f 2026-03-03))

Generated from crates: core, alloc, std.

Columns: Index · API Path · Kind · Safety Documentation
1. `alloc::alloc::alloc` (function): See [`GlobalAlloc::alloc`].
2. `alloc::alloc::alloc_zeroed` (function): See [`GlobalAlloc::alloc_zeroed`].
3. `alloc::alloc::dealloc` (function): See [`GlobalAlloc::dealloc`].
4. `alloc::alloc::realloc` (function): See [`GlobalAlloc::realloc`].
5. `alloc::boxed::Box::assume_init` (function): As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the value really is in an initialized state (for the boxed-slice variant, that the values really are). Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
6. `alloc::boxed::Box::downcast_unchecked` (function): The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*.
7. `alloc::boxed::Box::from_non_null` (function): This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same `NonNull` pointer. The non-null pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section.
8. `alloc::boxed::Box::from_non_null_in` (function): This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The non-null pointer must point to a block of memory allocated by `alloc`.
9. `alloc::boxed::Box::from_raw` (function): This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section.
10. `alloc::boxed::Box::from_raw_in` (function): This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by `alloc`.
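Entries 7 through 10 share one contract: the pointer must come from the matching `into_raw`/allocator, and ownership may be reclaimed exactly once. A minimal sketch of the supported round trip (the helper name is ours), using only the stable `Box::into_raw`/`Box::from_raw` pair:

```rust
// Round-trip a Box through a raw pointer and back.
fn box_round_trip() -> i32 {
    // `into_raw` releases ownership to the raw pointer without deallocating.
    let raw: *mut i32 = Box::into_raw(Box::new(7));
    // SAFETY: `raw` came from `Box::into_raw` on a Box using the global
    // allocator, and ownership is reclaimed exactly once here.
    let boxed: Box<i32> = unsafe { Box::from_raw(raw) };
    *boxed
    // `boxed` is dropped here; calling `Box::from_raw(raw)` a second time
    // would be the double-free the documentation warns about.
}

fn main() {
    assert_eq!(box_round_trip(), 7);
}
```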
11. `alloc::collections::binary_heap::BinaryHeap::from_raw_vec` (function): The supplied `vec` must be a max-heap, i.e. for all indices `0 < i < vec.len()`, `vec[(i - 1) / 2] >= vec[i]`.
12. `alloc::collections::btree::map::CursorMut::insert_after_unchecked` (function): You must ensure that the `BTreeMap` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All keys in the tree must remain in sorted order.
13. `alloc::collections::btree::map::CursorMut::insert_before_unchecked` (function): You must ensure that the `BTreeMap` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All keys in the tree must remain in sorted order.
14. `alloc::collections::btree::map::CursorMut::with_mutable_key` (function): Since this cursor allows mutating keys, you must ensure that the `BTreeMap` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All keys in the tree must remain in sorted order.
15. `alloc::collections::btree::map::CursorMutKey::insert_after_unchecked` (function): You must ensure that the `BTreeMap` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All keys in the tree must remain in sorted order.
16. `alloc::collections::btree::map::CursorMutKey::insert_before_unchecked` (function): You must ensure that the `BTreeMap` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All keys in the tree must remain in sorted order.
17. `alloc::collections::btree::set::CursorMut::insert_after_unchecked` (function): You must ensure that the `BTreeSet` invariants are maintained. Specifically:
    * The newly inserted element must be unique in the tree.
    * All elements in the tree must remain in sorted order.
18. `alloc::collections::btree::set::CursorMut::insert_before_unchecked` (function): You must ensure that the `BTreeSet` invariants are maintained. Specifically:
    * The newly inserted element must be unique in the tree.
    * All elements in the tree must remain in sorted order.
19. `alloc::collections::btree::set::CursorMut::with_mutable_key` (function): Since this cursor allows mutating elements, you must ensure that the `BTreeSet` invariants are maintained. Specifically:
    * The newly inserted element must be unique in the tree.
    * All elements in the tree must remain in sorted order.
20. `alloc::collections::btree::set::CursorMutKey::insert_after_unchecked` (function): You must ensure that the `BTreeSet` invariants are maintained. Specifically:
    * The key of the newly inserted element must be unique in the tree.
    * All elements in the tree must remain in sorted order.
21. `alloc::collections::btree::set::CursorMutKey::insert_before_unchecked` (function): You must ensure that the `BTreeSet` invariants are maintained. Specifically:
    * The newly inserted element must be unique in the tree.
    * All elements in the tree must remain in sorted order.
22. `alloc::ffi::c_str::CString::from_raw` (function): This should only ever be called with a pointer that was earlier obtained by calling [`CString::into_raw`], and the memory it points to must not be accessed through any other pointer during the lifetime of the reconstructed `CString`. Other usage (e.g., trying to take ownership of a string that was allocated by foreign code) is likely to lead to undefined behavior or allocator corruption. This function does not validate ownership of the raw pointer's memory. A double-free may occur if the function is called twice on the same raw pointer. Additionally, the caller must ensure the pointer is not dangling. It should be noted that the length isn't just "recomputed," but that the recomputed length must match the original length from the [`CString::into_raw`] call. This means the [`CString::into_raw`]/`from_raw` methods should not be used when passing the string to C functions that can modify the string's length. **Note:** If you need to borrow a string that was allocated by foreign code, use [`CStr`]. If you need to take ownership of a string that was allocated by foreign code, you will need to make your own provisions for freeing it appropriately, likely with the foreign code's API to do that.
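The only supported pairing for entry 22 is `into_raw` out, `from_raw` back, with the nul-terminated length unchanged in between. A minimal sketch (the helper name is ours):

```rust
use std::ffi::CString;

// Hand a C string to a raw pointer and reclaim it exactly once.
fn cstring_round_trip() -> String {
    let ptr: *mut std::ffi::c_char = CString::new("hello").unwrap().into_raw();
    // ... a C callee could read the string here, but must not free it or
    // change its length ...
    // SAFETY: `ptr` came from `CString::into_raw`, is reclaimed exactly once,
    // and the string's length was not modified while it was out.
    let s = unsafe { CString::from_raw(ptr) };
    s.into_string().unwrap()
}

fn main() {
    assert_eq!(cstring_round_trip(), "hello");
}
```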
23. `alloc::ffi::c_str::CString::from_vec_unchecked` (function)
24. `alloc::ffi::c_str::CString::from_vec_with_nul_unchecked` (function): The given [`Vec`] **must** have one nul byte as its last element. This means it cannot be empty nor have any other nul byte anywhere else.
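Entry 24's invariant (exactly one nul, in the last position) can be upheld by construction, as in this sketch:

```rust
use std::ffi::CString;

// Build a CString from bytes whose nul placement we control.
fn from_checked_bytes() -> usize {
    let bytes = b"abc\0".to_vec();
    // SAFETY: the buffer contains exactly one nul byte, and it is the last
    // element, as `from_vec_with_nul_unchecked` requires.
    let c = unsafe { CString::from_vec_with_nul_unchecked(bytes) };
    c.as_bytes().len() // length of the contents, excluding the trailing nul
}

fn main() {
    assert_eq!(from_checked_bytes(), 3);
}
```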
25. `alloc::rc::Rc::assume_init` (function): As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
26. `alloc::rc::Rc::decrement_strong_count` (function): The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in
27. `alloc::rc::Rc::decrement_strong_count_in` (function): The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in
28. `alloc::rc::Rc::downcast_unchecked` (function): The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*.
29. `alloc::rc::Rc::from_raw` (function)
30. `alloc::rc::Rc::from_raw_in` (function)
31. `alloc::rc::Rc::get_mut_unchecked` (function): If any other `Rc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this Rc (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Rc::new`.
32. `alloc::rc::Rc::increment_strong_count` (function): The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Rc::from_raw_in
33. `alloc::rc::Rc::increment_strong_count_in` (function): The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Rc::from_raw_in
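The increment/decrement pair above (entries 26 and 32) manipulates the strong count through a raw pointer obtained from `Rc::into_raw`, typically to hand clones across an FFI boundary. A balanced sketch (the helper name is ours):

```rust
use std::rc::Rc;

// Manually adjust the strong count through a raw pointer, then reclaim the Rc.
fn manual_count() -> usize {
    let rc = Rc::new(1u8);
    let ptr = Rc::into_raw(rc); // strong count is 1, owned by `ptr`
    // SAFETY: `ptr` came from `Rc::into_raw` and the allocation is alive
    // (strong count >= 1) for the duration of the call.
    unsafe { Rc::increment_strong_count(ptr) }; // count: 2
    // SAFETY: same provenance; this releases the count we just added, not the
    // final reference, so the allocation stays alive.
    unsafe { Rc::decrement_strong_count(ptr) }; // count: 1
    // SAFETY: reclaim the original reference exactly once.
    let rc = unsafe { Rc::from_raw(ptr) };
    Rc::strong_count(&rc)
}

fn main() {
    assert_eq!(manual_count(), 1);
}
```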
34. `alloc::rc::Weak::from_raw` (function): The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
35. `alloc::rc::Weak::from_raw_in` (function): The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
36. `alloc::str::from_boxed_utf8_unchecked` (function): The provided bytes must contain a valid UTF-8 sequence.
37. `alloc::string::String::as_mut_vec` (function): This function is unsafe because the returned `&mut Vec` allows writing bytes which are not valid UTF-8. If this constraint is violated, using the original `String` after dropping the `&mut Vec` may violate memory safety, as the rest of the standard library assumes that `String`s are valid UTF-8.
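The `as_mut_vec` contract is satisfied as long as every write keeps the buffer valid UTF-8. A sketch that stays safe by only replacing ASCII bytes with ASCII bytes (the helper name is ours):

```rust
// Uppercase ASCII letters in place through the String's byte buffer.
fn ascii_upper_in_place(mut s: String) -> String {
    // SAFETY: we only replace ASCII bytes with other ASCII bytes
    // (`make_ascii_uppercase` leaves non-ASCII bytes untouched), so the
    // buffer is still valid UTF-8 when the `&mut Vec<u8>` is dropped.
    let vec = unsafe { s.as_mut_vec() };
    for b in vec.iter_mut() {
        b.make_ascii_uppercase();
    }
    s
}

fn main() {
    assert_eq!(ascii_upper_in_place("hello".to_string()), "HELLO");
}
```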
38. `alloc::string::String::from_raw_parts` (function): This is highly unsafe, due to the number of invariants that aren't checked:
    * all safety requirements for [`Vec::<u8>::from_raw_parts`].
    * all safety requirements for [`String::from_utf8_unchecked`].

    Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `String` from a pointer to a C `char` array containing UTF-8 _unless_ you are certain that array was originally allocated by the Rust standard library's allocator. The ownership of `buf` is effectively transferred to the `String` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function.
39. `alloc::string::String::from_utf8_unchecked` (function): This function is unsafe because it does not check that the bytes passed to it are valid UTF-8. If this constraint is violated, it may cause memory unsafety issues with future users of the `String`, as the rest of the standard library assumes that `String`s are valid UTF-8.
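Entry 39 is typically used when the caller has already proven validity elsewhere. A sketch that documents the contract with a debug-only re-check (the helper name is ours):

```rust
// Convert bytes already known to be UTF-8, skipping the runtime validation.
fn decode(bytes: Vec<u8>) -> String {
    // Re-check the invariant in debug builds only; release builds trust the
    // producer of `bytes`.
    debug_assert!(std::str::from_utf8(&bytes).is_ok());
    // SAFETY: `bytes` is valid UTF-8, as asserted above.
    unsafe { String::from_utf8_unchecked(bytes) }
}

fn main() {
    // F0 9F A6 80 is the UTF-8 encoding of U+1F980 (crab).
    assert_eq!(decode(vec![0xF0, 0x9F, 0xA6, 0x80]), "\u{1F980}");
}
```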
40. `alloc::sync::Arc::assume_init` (function): As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init
41. `alloc::sync::Arc::decrement_strong_count` (function): The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in
42. `alloc::sync::Arc::decrement_strong_count_in` (function): The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in
43. `alloc::sync::Arc::downcast_unchecked` (function): The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*.
44. `alloc::sync::Arc::from_raw` (function)
45. `alloc::sync::Arc::from_raw_in` (function)
46. `alloc::sync::Arc::get_mut_unchecked` (function): If any other `Arc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this Arc (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Arc::new`.
47. `alloc::sync::Arc::increment_strong_count` (function): The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Arc::from_raw_in
48. `alloc::sync::Arc::increment_strong_count_in` (function): The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Arc::from_raw_in
49. `alloc::sync::Weak::from_raw` (function): The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
50. `alloc::sync::Weak::from_raw_in` (function): The pointer must have originated from [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`].
51. `alloc::vec::Vec::from_parts` (function): This is highly unsafe, due to the number of invariants that aren't checked:
    * `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function.
    * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
    * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.)
    * `length` needs to be less than or equal to `capacity`.
    * The first `length` values must be properly initialized values of type `T`.
    * `capacity` needs to be the capacity that the pointer was allocated with.
    * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].

    These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`NonNull::slice_from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
52. `alloc::vec::Vec::from_parts_in` (function): This is highly unsafe, due to the number of invariants that aren't checked:
    * `ptr` must be [*currently allocated*] via the given allocator `alloc`.
    * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
    * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.)
    * `length` needs to be less than or equal to `capacity`.
    * The first `length` values must be properly initialized values of type `T`.
    * `capacity` needs to [*fit*] the layout size that the pointer was allocated with.
    * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].

    These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting
53. `alloc::vec::Vec::from_raw_parts` (function): This is highly unsafe, due to the number of invariants that aren't checked:
    * If `T` is not a zero-sized type and the capacity is nonzero, `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function. If `T` is a zero-sized type or the capacity is zero, `ptr` need only be non-null and aligned.
    * `T` needs to have the same alignment as what `ptr` was allocated with, if the pointer is required to be allocated. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
    * The size of `T` times the `capacity` (i.e. the allocated size in bytes), if nonzero, needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.)
    * `length` needs to be less than or equal to `capacity`.
    * The first `length` values must be properly initialized values of type `T`.
    * `capacity` needs to be the capacity that the pointer was allocated with, if the pointer is required to be allocated.
    * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].

    These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`slice::from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc
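All of entry 53's invariants hold automatically when the parts came from a `Vec<T>` of the same type. A sketch of that round trip, using `ManuallyDrop` to release ownership (since the dedicated `Vec::into_raw_parts` is still unstable; the helper name is ours):

```rust
use std::mem::ManuallyDrop;

// Decompose a Vec into (ptr, len, cap) and rebuild it with from_raw_parts.
fn vec_round_trip() -> Vec<i32> {
    // ManuallyDrop releases ownership without freeing the buffer.
    let mut v = ManuallyDrop::new(vec![1, 2, 3]);
    let (ptr, len, cap) = (v.as_mut_ptr(), v.len(), v.capacity());
    // SAFETY: ptr/len/cap describe an allocation made by a `Vec<i32>` via the
    // global allocator, ownership was released above, and nothing else uses
    // the pointer after this call.
    unsafe { Vec::from_raw_parts(ptr, len, cap) }
}

fn main() {
    assert_eq!(vec_round_trip(), vec![1, 2, 3]);
}
```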
54. `alloc::vec::Vec::from_raw_parts_in` (function): This is highly unsafe, due to the number of invariants that aren't checked:
    * `ptr` must be [*currently allocated*] via the given allocator `alloc`.
    * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient; the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.)
    * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.)
    * `length` needs to be less than or equal to `capacity`.
    * The first `length` values must be properly initialized values of type `T`.
    * `capacity` needs to [*fit*] the layout size that the pointer was allocated with.
    * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`].

    These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments: the buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>`, which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting
55. `alloc::vec::Vec::set_len` (function):
    * `new_len` must be less than or equal to [`capacity()`].
    * The elements at `old_len..new_len` must be initialized.

    [`capacity()`]: Vec::capacity
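Entry 55's two conditions are easiest to uphold when the writes into spare capacity directly precede the `set_len` call, as in this sketch (the helper name is ours):

```rust
// Initialize a Vec's spare capacity by raw writes, then publish the length.
fn fill(n: usize) -> Vec<u32> {
    let mut v: Vec<u32> = Vec::with_capacity(n);
    for i in 0..n {
        // Write through the spare capacity without creating references to
        // uninitialized memory.
        unsafe { v.as_mut_ptr().add(i).write(i as u32) };
    }
    // SAFETY: n <= capacity (reserved above), and elements 0..n were just
    // initialized by the loop.
    unsafe { v.set_len(n) };
    v
}

fn main() {
    assert_eq!(fill(4), vec![0, 1, 2, 3]);
}
```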
56. `core::alloc::Allocator` (trait): Memory blocks that are [*currently allocated*] by an allocator must point to valid memory, and retain their validity until either:
    * the memory block is deallocated, or
    * the allocator is dropped.

    Copying, cloning, or moving the allocator must not invalidate memory blocks returned from it. A copied or cloned allocator must behave like the original allocator. A memory block which is [*currently allocated*] may be passed to any method of the allocator that accepts such an argument. [*currently allocated*]: #currently-allocated-memory
57. `core::alloc::global::GlobalAlloc` (trait): The `GlobalAlloc` trait is an `unsafe` trait for a number of reasons, and implementors must ensure that they adhere to these contracts:
    * It's undefined behavior if global allocators unwind. This restriction may be lifted in the future, but currently a panic from any of these functions may lead to memory unsafety.
    * `Layout` queries and calculations in general must be correct. Callers of this trait are allowed to rely on the contracts defined on each method, and implementors must ensure such contracts remain true.
    * You must not rely on allocations actually happening, even if there are explicit heap allocations in the source. The optimizer may detect unused allocations that it can either eliminate entirely or move to the stack and thus never invoke the allocator. The optimizer may further assume that allocation is infallible, so code that used to fail due to allocator failures may now suddenly work because the optimizer worked around the need for an allocation. More concretely, the following code example is unsound, irrespective of whether your custom allocator allows counting how many allocations have happened.

    ```rust,ignore (unsound and has placeholders)
    drop(Box::new(42));
    let number_of_heap_allocs = /* call private allocator API */;
    unsafe { std::hint::assert_unchecked(number_of_heap_allocs > 0); }
    ```

    Note that the optimizations mentioned above are not the only optimization that can be applied. You may generally not rely on heap allocations happening if they can be removed without changing program behavior. Whether allocations happen or not is not part of the program behavior, even if it could be detected via an allocator that tracks allocations by printing or otherwise having side effects.
58. `core::alloc::layout::Layout::for_value_raw` (function): This function is only safe to call if the following conditions hold:
    * If `T` is `Sized`, this function is always safe to call.
    * If the unsized tail of `T` is:
        * a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call.
        * a [trait object], then the vtable part of the pointer must point to a valid vtable for the type `T` acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`.
        * an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Layout::for_value`] on a reference to an extern type tail.
        * otherwise, it is conservatively not allowed to call this function.

    [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
59. `core::alloc::layout::Layout::from_size_align_unchecked` (function): This function is unsafe as it does not verify the preconditions from [`Layout::from_size_align`].
60. `core::alloc::layout::Layout::from_size_alignment_unchecked` (function): This function is unsafe as it does not verify the preconditions from [`Layout::from_size_alignment`].
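For entry 59, the skipped preconditions are the ones `Layout::from_size_align` checks: the alignment is a nonzero power of two, and the size, rounded up to the alignment, does not overflow `isize::MAX`. A sketch with constants that satisfy them trivially (the helper name is ours):

```rust
use std::alloc::Layout;

// Build a layout for an 8-byte, 8-aligned value without the runtime checks.
fn layout_of_u64() -> (usize, usize) {
    // SAFETY: 8 is a nonzero power of two, and 8 rounded up to alignment 8
    // does not overflow isize::MAX; these are exactly the preconditions that
    // `Layout::from_size_align` would have verified.
    let layout = unsafe { Layout::from_size_align_unchecked(8, 8) };
    (layout.size(), layout.align())
}

fn main() {
    assert_eq!(layout_of_u64(), (8, 8));
}
```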
61. `core::array::as_ascii_unchecked` (function): Every byte in the array must be in `0..=127`, or else this is UB.
62. `core::array::iter::IntoIter::new_unchecked` (function):
    * The `buffer[initialized]` elements must all be initialized.
    * The range must be canonical, with `initialized.start <= initialized.end`.
    * The range must be in-bounds for the buffer, with `initialized.end <= N`. (Like how indexing `[0][100..100]` fails despite the range being empty.)

    It's sound to have more elements initialized than mentioned, though that will most likely result in them being leaked.
63. `core::ascii::ascii_char::AsciiChar::digit_unchecked` (function): This is immediately UB if called with `d > 64`. If `d >= 10` and `d <= 64`, this is allowed to return any value or panic. Notably, it should not be expected to return hex digits, or any other reasonable extension of the decimal digits. (This loose safety condition is intended to simplify soundness proofs when writing code using this method, since the implementation doesn't need something really specific, not to make those other arguments do something useful. It might be tightened before stabilization.)
64. `core::ascii::ascii_char::AsciiChar::from_u8_unchecked` (function): `b` must be in `0..=127`, or else this is UB.
65. `core::cell::CloneFromCell` (trait): Implementing this trait for a type is sound if and only if the following code is sound for `T` = that type.

    ```rust
    #![feature(cell_get_cloned)]
    ```
66. `core::cell::RefCell::try_borrow_unguarded` (function): Unlike `RefCell::borrow`, this method is unsafe because it does not return a `Ref`, thus leaving the borrow flag untouched. Mutably borrowing the `RefCell` while the reference returned by this method is alive is undefined behavior.
67. `core::cell::UnsafeCell::as_mut_unchecked` (function):
    * It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive.
    * Mutating the wrapped value through other means while the returned reference is alive is Undefined Behavior.
68. `core::cell::UnsafeCell::as_ref_unchecked` (function):
    * It is Undefined Behavior to call this while any mutable reference to the wrapped value is alive.
    * Mutating the wrapped value while the returned reference is alive is Undefined Behavior.
69. `core::cell::UnsafeCell::replace` (function): The caller must take care to avoid aliasing and data races.
    * It is Undefined Behavior to allow calls to race with any other access to the wrapped value.
    * It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive.
70. `core::char::as_ascii_unchecked` (function): This char must be within the ASCII range, or else this is UB.
71. `core::char::from_u32_unchecked` (function): This function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32
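Entry 71's contract is that the `u32` must be a valid Unicode scalar value (not a surrogate, not above `char::MAX`). A sketch with a hypothetical helper (the name and the debug re-check are ours):

```rust
// Map a char to its successor code point without the validity check.
fn next_char(c: char) -> char {
    let v = c as u32 + 1;
    // Re-verify the contract in debug builds; callers are expected to pass
    // only chars whose successor is still a valid scalar value.
    debug_assert!(char::from_u32(v).is_some());
    // SAFETY: `v` is a valid Unicode scalar value, as asserted above.
    unsafe { char::from_u32_unchecked(v) }
}

fn main() {
    assert_eq!(next_char('a'), 'b');
}
```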
72core::cloneCloneToUninittraitImplementations must ensure that when `.clone_to_uninit(dest)` returns normally rather than panicking, it always leaves `*dest` initialized as a valid value of type `Self`.
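`CloneToUninit` is nightly; for a `Sized` type its guarantee can be sketched on stable Rust with `MaybeUninit` (the helper name here is hypothetical):

```rust
use std::mem::MaybeUninit;

// Sketch of the contract: on normal return, `*dest` holds a valid
// clone of `*src`, so the caller may assume_init it.
fn clone_to_uninit_sketch<T: Clone>(src: &T, dest: &mut MaybeUninit<T>) {
    dest.write(src.clone());
}

fn main() {
    let mut slot = MaybeUninit::<String>::uninit();
    clone_to_uninit_sketch(&String::from("hi"), &mut slot);
    // SAFETY: the call above initialized `slot`.
    let s = unsafe { slot.assume_init() };
    assert_eq!(s, "hi");
}
```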
73core::cloneTrivialClonetrait`Clone::clone` must be equivalent to copying the value, otherwise calling functions such as `slice::clone_from_slice` can have undefined behaviour.
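The `TrivialClone` requirement matters because functions such as `clone_from_slice` may specialize cloning to a raw copy; for types whose `Clone` really is equivalent to a copy, the two are observably identical:

```rust
fn main() {
    let src = [1u32, 2, 3];
    let mut by_clone = [0u32; 3];
    let mut by_copy = [0u32; 3];
    // For trivially cloneable element types this may be lowered to a memcpy.
    by_clone.clone_from_slice(&src);
    by_copy.copy_from_slice(&src);
    assert_eq!(by_clone, by_copy);
}
```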
74core::core_arch::aarch64::mte__arm_mte_create_random_tagfunction
75core::core_arch::aarch64::mte__arm_mte_exclude_tagfunction
76core::core_arch::aarch64::mte__arm_mte_get_tagfunction
77core::core_arch::aarch64::mte__arm_mte_increment_tagfunction
78core::core_arch::aarch64::mte__arm_mte_ptrdifffunction
79core::core_arch::aarch64::mte__arm_mte_set_tagfunction
80core::core_arch::aarch64::neonvld1_dup_f64function
81core::core_arch::aarch64::neonvld1_lane_f64function
82core::core_arch::aarch64::neonvld1q_dup_f64function
83core::core_arch::aarch64::neonvld1q_lane_f64function
84core::core_arch::aarch64::neon::generatedvld1_f16function* Neon intrinsic unsafe
85core::core_arch::aarch64::neon::generatedvld1_f32function* Neon intrinsic unsafe
86core::core_arch::aarch64::neon::generatedvld1_f64function* Neon intrinsic unsafe
87core::core_arch::aarch64::neon::generatedvld1_f64_x2function* Neon intrinsic unsafe
88core::core_arch::aarch64::neon::generatedvld1_f64_x3function* Neon intrinsic unsafe
89core::core_arch::aarch64::neon::generatedvld1_f64_x4function* Neon intrinsic unsafe
90core::core_arch::aarch64::neon::generatedvld1_p16function* Neon intrinsic unsafe
91core::core_arch::aarch64::neon::generatedvld1_p64function* Neon intrinsic unsafe
92core::core_arch::aarch64::neon::generatedvld1_p8function* Neon intrinsic unsafe
93core::core_arch::aarch64::neon::generatedvld1_s16function* Neon intrinsic unsafe
94core::core_arch::aarch64::neon::generatedvld1_s32function* Neon intrinsic unsafe
95core::core_arch::aarch64::neon::generatedvld1_s64function* Neon intrinsic unsafe
96core::core_arch::aarch64::neon::generatedvld1_s8function* Neon intrinsic unsafe
97core::core_arch::aarch64::neon::generatedvld1_u16function* Neon intrinsic unsafe
98core::core_arch::aarch64::neon::generatedvld1_u32function* Neon intrinsic unsafe
99core::core_arch::aarch64::neon::generatedvld1_u64function* Neon intrinsic unsafe
100core::core_arch::aarch64::neon::generatedvld1_u8function* Neon intrinsic unsafe
101core::core_arch::aarch64::neon::generatedvld1q_f16function* Neon intrinsic unsafe
102core::core_arch::aarch64::neon::generatedvld1q_f32function* Neon intrinsic unsafe
103core::core_arch::aarch64::neon::generatedvld1q_f64function* Neon intrinsic unsafe
104core::core_arch::aarch64::neon::generatedvld1q_f64_x2function* Neon intrinsic unsafe
105core::core_arch::aarch64::neon::generatedvld1q_f64_x3function* Neon intrinsic unsafe
106core::core_arch::aarch64::neon::generatedvld1q_f64_x4function* Neon intrinsic unsafe
107core::core_arch::aarch64::neon::generatedvld1q_p16function* Neon intrinsic unsafe
108core::core_arch::aarch64::neon::generatedvld1q_p64function* Neon intrinsic unsafe
109core::core_arch::aarch64::neon::generatedvld1q_p8function* Neon intrinsic unsafe
110core::core_arch::aarch64::neon::generatedvld1q_s16function* Neon intrinsic unsafe
111core::core_arch::aarch64::neon::generatedvld1q_s32function* Neon intrinsic unsafe
112core::core_arch::aarch64::neon::generatedvld1q_s64function* Neon intrinsic unsafe
113core::core_arch::aarch64::neon::generatedvld1q_s8function* Neon intrinsic unsafe
114core::core_arch::aarch64::neon::generatedvld1q_u16function* Neon intrinsic unsafe
115core::core_arch::aarch64::neon::generatedvld1q_u32function* Neon intrinsic unsafe
116core::core_arch::aarch64::neon::generatedvld1q_u64function* Neon intrinsic unsafe
117core::core_arch::aarch64::neon::generatedvld1q_u8function* Neon intrinsic unsafe
118core::core_arch::aarch64::neon::generatedvld2_dup_f64function* Neon intrinsic unsafe
119core::core_arch::aarch64::neon::generatedvld2_f64function* Neon intrinsic unsafe
120core::core_arch::aarch64::neon::generatedvld2_lane_f64function* Neon intrinsic unsafe
121core::core_arch::aarch64::neon::generatedvld2_lane_p64function* Neon intrinsic unsafe
122core::core_arch::aarch64::neon::generatedvld2_lane_s64function* Neon intrinsic unsafe
123core::core_arch::aarch64::neon::generatedvld2_lane_u64function* Neon intrinsic unsafe
124core::core_arch::aarch64::neon::generatedvld2q_dup_f64function* Neon intrinsic unsafe
125core::core_arch::aarch64::neon::generatedvld2q_dup_p64function* Neon intrinsic unsafe
126core::core_arch::aarch64::neon::generatedvld2q_dup_s64function* Neon intrinsic unsafe
127core::core_arch::aarch64::neon::generatedvld2q_dup_u64function* Neon intrinsic unsafe
128core::core_arch::aarch64::neon::generatedvld2q_f64function* Neon intrinsic unsafe
129core::core_arch::aarch64::neon::generatedvld2q_lane_f64function* Neon intrinsic unsafe
130core::core_arch::aarch64::neon::generatedvld2q_lane_p64function* Neon intrinsic unsafe
131core::core_arch::aarch64::neon::generatedvld2q_lane_p8function* Neon intrinsic unsafe
132core::core_arch::aarch64::neon::generatedvld2q_lane_s64function* Neon intrinsic unsafe
133core::core_arch::aarch64::neon::generatedvld2q_lane_s8function* Neon intrinsic unsafe
134core::core_arch::aarch64::neon::generatedvld2q_lane_u64function* Neon intrinsic unsafe
135core::core_arch::aarch64::neon::generatedvld2q_lane_u8function* Neon intrinsic unsafe
136core::core_arch::aarch64::neon::generatedvld2q_p64function* Neon intrinsic unsafe
137core::core_arch::aarch64::neon::generatedvld2q_s64function* Neon intrinsic unsafe
138core::core_arch::aarch64::neon::generatedvld2q_u64function* Neon intrinsic unsafe
139core::core_arch::aarch64::neon::generatedvld3_dup_f64function* Neon intrinsic unsafe
140core::core_arch::aarch64::neon::generatedvld3_f64function* Neon intrinsic unsafe
141core::core_arch::aarch64::neon::generatedvld3_lane_f64function* Neon intrinsic unsafe
142core::core_arch::aarch64::neon::generatedvld3_lane_p64function* Neon intrinsic unsafe
143core::core_arch::aarch64::neon::generatedvld3_lane_s64function* Neon intrinsic unsafe
144core::core_arch::aarch64::neon::generatedvld3_lane_u64function* Neon intrinsic unsafe
145core::core_arch::aarch64::neon::generatedvld3q_dup_f64function* Neon intrinsic unsafe
146core::core_arch::aarch64::neon::generatedvld3q_dup_p64function* Neon intrinsic unsafe
147core::core_arch::aarch64::neon::generatedvld3q_dup_s64function* Neon intrinsic unsafe
148core::core_arch::aarch64::neon::generatedvld3q_dup_u64function* Neon intrinsic unsafe
149core::core_arch::aarch64::neon::generatedvld3q_f64function* Neon intrinsic unsafe
150core::core_arch::aarch64::neon::generatedvld3q_lane_f64function* Neon intrinsic unsafe
151core::core_arch::aarch64::neon::generatedvld3q_lane_p64function* Neon intrinsic unsafe
152core::core_arch::aarch64::neon::generatedvld3q_lane_p8function* Neon intrinsic unsafe
153core::core_arch::aarch64::neon::generatedvld3q_lane_s64function* Neon intrinsic unsafe
154core::core_arch::aarch64::neon::generatedvld3q_lane_s8function* Neon intrinsic unsafe
155core::core_arch::aarch64::neon::generatedvld3q_lane_u64function* Neon intrinsic unsafe
156core::core_arch::aarch64::neon::generatedvld3q_lane_u8function* Neon intrinsic unsafe
157core::core_arch::aarch64::neon::generatedvld3q_p64function* Neon intrinsic unsafe
158core::core_arch::aarch64::neon::generatedvld3q_s64function* Neon intrinsic unsafe
159core::core_arch::aarch64::neon::generatedvld3q_u64function* Neon intrinsic unsafe
160core::core_arch::aarch64::neon::generatedvld4_dup_f64function* Neon intrinsic unsafe
161core::core_arch::aarch64::neon::generatedvld4_f64function* Neon intrinsic unsafe
162core::core_arch::aarch64::neon::generatedvld4_lane_f64function* Neon intrinsic unsafe
163core::core_arch::aarch64::neon::generatedvld4_lane_p64function* Neon intrinsic unsafe
164core::core_arch::aarch64::neon::generatedvld4_lane_s64function* Neon intrinsic unsafe
165core::core_arch::aarch64::neon::generatedvld4_lane_u64function* Neon intrinsic unsafe
166core::core_arch::aarch64::neon::generatedvld4q_dup_f64function* Neon intrinsic unsafe
167core::core_arch::aarch64::neon::generatedvld4q_dup_p64function* Neon intrinsic unsafe
168core::core_arch::aarch64::neon::generatedvld4q_dup_s64function* Neon intrinsic unsafe
169core::core_arch::aarch64::neon::generatedvld4q_dup_u64function* Neon intrinsic unsafe
170core::core_arch::aarch64::neon::generatedvld4q_f64function* Neon intrinsic unsafe
171core::core_arch::aarch64::neon::generatedvld4q_lane_f64function* Neon intrinsic unsafe
172core::core_arch::aarch64::neon::generatedvld4q_lane_p64function* Neon intrinsic unsafe
173core::core_arch::aarch64::neon::generatedvld4q_lane_p8function* Neon intrinsic unsafe
174core::core_arch::aarch64::neon::generatedvld4q_lane_s64function* Neon intrinsic unsafe
175core::core_arch::aarch64::neon::generatedvld4q_lane_s8function* Neon intrinsic unsafe
176core::core_arch::aarch64::neon::generatedvld4q_lane_u64function* Neon intrinsic unsafe
177core::core_arch::aarch64::neon::generatedvld4q_lane_u8function* Neon intrinsic unsafe
178core::core_arch::aarch64::neon::generatedvld4q_p64function* Neon intrinsic unsafe
179core::core_arch::aarch64::neon::generatedvld4q_s64function* Neon intrinsic unsafe
180core::core_arch::aarch64::neon::generatedvld4q_u64function* Neon intrinsic unsafe
181core::core_arch::aarch64::neon::generatedvldap1_lane_p64function* Neon intrinsic unsafe
182core::core_arch::aarch64::neon::generatedvldap1_lane_s64function* Neon intrinsic unsafe
183core::core_arch::aarch64::neon::generatedvldap1_lane_u64function* Neon intrinsic unsafe
184core::core_arch::aarch64::neon::generatedvldap1q_lane_f64function* Neon intrinsic unsafe
185core::core_arch::aarch64::neon::generatedvldap1q_lane_p64function* Neon intrinsic unsafe
186core::core_arch::aarch64::neon::generatedvldap1q_lane_s64function* Neon intrinsic unsafe
187core::core_arch::aarch64::neon::generatedvldap1q_lane_u64function* Neon intrinsic unsafe
188core::core_arch::aarch64::neon::generatedvluti2_lane_f16function* Neon intrinsic unsafe
189core::core_arch::aarch64::neon::generatedvluti2_lane_p16function* Neon intrinsic unsafe
190core::core_arch::aarch64::neon::generatedvluti2_lane_p8function* Neon intrinsic unsafe
191core::core_arch::aarch64::neon::generatedvluti2_lane_s16function* Neon intrinsic unsafe
192core::core_arch::aarch64::neon::generatedvluti2_lane_s8function* Neon intrinsic unsafe
193core::core_arch::aarch64::neon::generatedvluti2_lane_u16function* Neon intrinsic unsafe
194core::core_arch::aarch64::neon::generatedvluti2_lane_u8function* Neon intrinsic unsafe
195core::core_arch::aarch64::neon::generatedvluti2_laneq_f16function* Neon intrinsic unsafe
196core::core_arch::aarch64::neon::generatedvluti2_laneq_p16function* Neon intrinsic unsafe
197core::core_arch::aarch64::neon::generatedvluti2_laneq_p8function* Neon intrinsic unsafe
198core::core_arch::aarch64::neon::generatedvluti2_laneq_s16function* Neon intrinsic unsafe
199core::core_arch::aarch64::neon::generatedvluti2_laneq_s8function* Neon intrinsic unsafe
200core::core_arch::aarch64::neon::generatedvluti2_laneq_u16function* Neon intrinsic unsafe
201core::core_arch::aarch64::neon::generatedvluti2_laneq_u8function* Neon intrinsic unsafe
202core::core_arch::aarch64::neon::generatedvluti2q_lane_f16function* Neon intrinsic unsafe
203core::core_arch::aarch64::neon::generatedvluti2q_lane_p16function* Neon intrinsic unsafe
204core::core_arch::aarch64::neon::generatedvluti2q_lane_p8function* Neon intrinsic unsafe
205core::core_arch::aarch64::neon::generatedvluti2q_lane_s16function* Neon intrinsic unsafe
206core::core_arch::aarch64::neon::generatedvluti2q_lane_s8function* Neon intrinsic unsafe
207core::core_arch::aarch64::neon::generatedvluti2q_lane_u16function* Neon intrinsic unsafe
208core::core_arch::aarch64::neon::generatedvluti2q_lane_u8function* Neon intrinsic unsafe
209core::core_arch::aarch64::neon::generatedvluti2q_laneq_f16function* Neon intrinsic unsafe
210core::core_arch::aarch64::neon::generatedvluti2q_laneq_p16function* Neon intrinsic unsafe
211core::core_arch::aarch64::neon::generatedvluti2q_laneq_p8function* Neon intrinsic unsafe
212core::core_arch::aarch64::neon::generatedvluti2q_laneq_s16function* Neon intrinsic unsafe
213core::core_arch::aarch64::neon::generatedvluti2q_laneq_s8function* Neon intrinsic unsafe
214core::core_arch::aarch64::neon::generatedvluti2q_laneq_u16function* Neon intrinsic unsafe
215core::core_arch::aarch64::neon::generatedvluti2q_laneq_u8function* Neon intrinsic unsafe
216core::core_arch::aarch64::neon::generatedvluti4q_lane_f16_x2function* Neon intrinsic unsafe
217core::core_arch::aarch64::neon::generatedvluti4q_lane_p16_x2function* Neon intrinsic unsafe
218core::core_arch::aarch64::neon::generatedvluti4q_lane_p8function* Neon intrinsic unsafe
219core::core_arch::aarch64::neon::generatedvluti4q_lane_s16_x2function* Neon intrinsic unsafe
220core::core_arch::aarch64::neon::generatedvluti4q_lane_s8function* Neon intrinsic unsafe
221core::core_arch::aarch64::neon::generatedvluti4q_lane_u16_x2function* Neon intrinsic unsafe
222core::core_arch::aarch64::neon::generatedvluti4q_lane_u8function* Neon intrinsic unsafe
223core::core_arch::aarch64::neon::generatedvluti4q_laneq_f16_x2function* Neon intrinsic unsafe
224core::core_arch::aarch64::neon::generatedvluti4q_laneq_p16_x2function* Neon intrinsic unsafe
225core::core_arch::aarch64::neon::generatedvluti4q_laneq_p8function* Neon intrinsic unsafe
226core::core_arch::aarch64::neon::generatedvluti4q_laneq_s16_x2function* Neon intrinsic unsafe
227core::core_arch::aarch64::neon::generatedvluti4q_laneq_s8function* Neon intrinsic unsafe
228core::core_arch::aarch64::neon::generatedvluti4q_laneq_u16_x2function* Neon intrinsic unsafe
229core::core_arch::aarch64::neon::generatedvluti4q_laneq_u8function* Neon intrinsic unsafe
230core::core_arch::aarch64::neon::generatedvst1_f16function* Neon intrinsic unsafe
231core::core_arch::aarch64::neon::generatedvst1_f32function* Neon intrinsic unsafe
232core::core_arch::aarch64::neon::generatedvst1_f64function* Neon intrinsic unsafe
233core::core_arch::aarch64::neon::generatedvst1_f64_x2function* Neon intrinsic unsafe
234core::core_arch::aarch64::neon::generatedvst1_f64_x3function* Neon intrinsic unsafe
235core::core_arch::aarch64::neon::generatedvst1_f64_x4function* Neon intrinsic unsafe
236core::core_arch::aarch64::neon::generatedvst1_lane_f64function* Neon intrinsic unsafe
237core::core_arch::aarch64::neon::generatedvst1_p16function* Neon intrinsic unsafe
238core::core_arch::aarch64::neon::generatedvst1_p64function* Neon intrinsic unsafe
239core::core_arch::aarch64::neon::generatedvst1_p8function* Neon intrinsic unsafe
240core::core_arch::aarch64::neon::generatedvst1_s16function* Neon intrinsic unsafe
241core::core_arch::aarch64::neon::generatedvst1_s32function* Neon intrinsic unsafe
242core::core_arch::aarch64::neon::generatedvst1_s64function* Neon intrinsic unsafe
243core::core_arch::aarch64::neon::generatedvst1_s8function* Neon intrinsic unsafe
244core::core_arch::aarch64::neon::generatedvst1_u16function* Neon intrinsic unsafe
245core::core_arch::aarch64::neon::generatedvst1_u32function* Neon intrinsic unsafe
246core::core_arch::aarch64::neon::generatedvst1_u64function* Neon intrinsic unsafe
247core::core_arch::aarch64::neon::generatedvst1_u8function* Neon intrinsic unsafe
248core::core_arch::aarch64::neon::generatedvst1q_f16function* Neon intrinsic unsafe
249core::core_arch::aarch64::neon::generatedvst1q_f32function* Neon intrinsic unsafe
250core::core_arch::aarch64::neon::generatedvst1q_f64function* Neon intrinsic unsafe
251core::core_arch::aarch64::neon::generatedvst1q_f64_x2function* Neon intrinsic unsafe
252core::core_arch::aarch64::neon::generatedvst1q_f64_x3function* Neon intrinsic unsafe
253core::core_arch::aarch64::neon::generatedvst1q_f64_x4function* Neon intrinsic unsafe
254core::core_arch::aarch64::neon::generatedvst1q_lane_f64function* Neon intrinsic unsafe
255core::core_arch::aarch64::neon::generatedvst1q_p16function* Neon intrinsic unsafe
256core::core_arch::aarch64::neon::generatedvst1q_p64function* Neon intrinsic unsafe
257core::core_arch::aarch64::neon::generatedvst1q_p8function* Neon intrinsic unsafe
258core::core_arch::aarch64::neon::generatedvst1q_s16function* Neon intrinsic unsafe
259core::core_arch::aarch64::neon::generatedvst1q_s32function* Neon intrinsic unsafe
260core::core_arch::aarch64::neon::generatedvst1q_s64function* Neon intrinsic unsafe
261core::core_arch::aarch64::neon::generatedvst1q_s8function* Neon intrinsic unsafe
262core::core_arch::aarch64::neon::generatedvst1q_u16function* Neon intrinsic unsafe
263core::core_arch::aarch64::neon::generatedvst1q_u32function* Neon intrinsic unsafe
264core::core_arch::aarch64::neon::generatedvst1q_u64function* Neon intrinsic unsafe
265core::core_arch::aarch64::neon::generatedvst1q_u8function* Neon intrinsic unsafe
266core::core_arch::aarch64::neon::generatedvst2_f64function* Neon intrinsic unsafe
267core::core_arch::aarch64::neon::generatedvst2_lane_f64function* Neon intrinsic unsafe
268core::core_arch::aarch64::neon::generatedvst2_lane_p64function* Neon intrinsic unsafe
269core::core_arch::aarch64::neon::generatedvst2_lane_s64function* Neon intrinsic unsafe
270core::core_arch::aarch64::neon::generatedvst2_lane_u64function* Neon intrinsic unsafe
271core::core_arch::aarch64::neon::generatedvst2q_f64function* Neon intrinsic unsafe
272core::core_arch::aarch64::neon::generatedvst2q_lane_f64function* Neon intrinsic unsafe
273core::core_arch::aarch64::neon::generatedvst2q_lane_p64function* Neon intrinsic unsafe
274core::core_arch::aarch64::neon::generatedvst2q_lane_p8function* Neon intrinsic unsafe
275core::core_arch::aarch64::neon::generatedvst2q_lane_s64function* Neon intrinsic unsafe
276core::core_arch::aarch64::neon::generatedvst2q_lane_s8function* Neon intrinsic unsafe
277core::core_arch::aarch64::neon::generatedvst2q_lane_u64function* Neon intrinsic unsafe
278core::core_arch::aarch64::neon::generatedvst2q_lane_u8function* Neon intrinsic unsafe
279core::core_arch::aarch64::neon::generatedvst2q_p64function* Neon intrinsic unsafe
280core::core_arch::aarch64::neon::generatedvst2q_s64function* Neon intrinsic unsafe
281core::core_arch::aarch64::neon::generatedvst2q_u64function* Neon intrinsic unsafe
282core::core_arch::aarch64::neon::generatedvst3_f64function* Neon intrinsic unsafe
283core::core_arch::aarch64::neon::generatedvst3_lane_f64function* Neon intrinsic unsafe
284core::core_arch::aarch64::neon::generatedvst3_lane_p64function* Neon intrinsic unsafe
285core::core_arch::aarch64::neon::generatedvst3_lane_s64function* Neon intrinsic unsafe
286core::core_arch::aarch64::neon::generatedvst3_lane_u64function* Neon intrinsic unsafe
287core::core_arch::aarch64::neon::generatedvst3q_f64function* Neon intrinsic unsafe
288core::core_arch::aarch64::neon::generatedvst3q_lane_f64function* Neon intrinsic unsafe
289core::core_arch::aarch64::neon::generatedvst3q_lane_p64function* Neon intrinsic unsafe
290core::core_arch::aarch64::neon::generatedvst3q_lane_p8function* Neon intrinsic unsafe
291core::core_arch::aarch64::neon::generatedvst3q_lane_s64function* Neon intrinsic unsafe
292core::core_arch::aarch64::neon::generatedvst3q_lane_s8function* Neon intrinsic unsafe
293core::core_arch::aarch64::neon::generatedvst3q_lane_u64function* Neon intrinsic unsafe
294core::core_arch::aarch64::neon::generatedvst3q_lane_u8function* Neon intrinsic unsafe
295core::core_arch::aarch64::neon::generatedvst3q_p64function* Neon intrinsic unsafe
296core::core_arch::aarch64::neon::generatedvst3q_s64function* Neon intrinsic unsafe
297core::core_arch::aarch64::neon::generatedvst3q_u64function* Neon intrinsic unsafe
298core::core_arch::aarch64::neon::generatedvst4_f64function* Neon intrinsic unsafe
299core::core_arch::aarch64::neon::generatedvst4_lane_f64function* Neon intrinsic unsafe
300core::core_arch::aarch64::neon::generatedvst4_lane_p64function* Neon intrinsic unsafe
301core::core_arch::aarch64::neon::generatedvst4_lane_s64function* Neon intrinsic unsafe
302core::core_arch::aarch64::neon::generatedvst4_lane_u64function* Neon intrinsic unsafe
303core::core_arch::aarch64::neon::generatedvst4q_f64function* Neon intrinsic unsafe
304core::core_arch::aarch64::neon::generatedvst4q_lane_f64function* Neon intrinsic unsafe
305core::core_arch::aarch64::neon::generatedvst4q_lane_p64function* Neon intrinsic unsafe
306core::core_arch::aarch64::neon::generatedvst4q_lane_p8function* Neon intrinsic unsafe
307core::core_arch::aarch64::neon::generatedvst4q_lane_s64function* Neon intrinsic unsafe
308core::core_arch::aarch64::neon::generatedvst4q_lane_s8function* Neon intrinsic unsafe
309core::core_arch::aarch64::neon::generatedvst4q_lane_u64function* Neon intrinsic unsafe
310core::core_arch::aarch64::neon::generatedvst4q_lane_u8function* Neon intrinsic unsafe
311core::core_arch::aarch64::neon::generatedvst4q_p64function* Neon intrinsic unsafe
312core::core_arch::aarch64::neon::generatedvst4q_s64function* Neon intrinsic unsafe
313core::core_arch::aarch64::neon::generatedvst4q_u64function* Neon intrinsic unsafe
314core::core_arch::aarch64::prefetch_prefetchfunction
315core::core_arch::amdgpuds_bpermutefunction
316core::core_arch::amdgpuds_permutefunction
317core::core_arch::amdgpupermfunction
318core::core_arch::amdgpupermlane16_swapfunction
319core::core_arch::amdgpupermlane16_u32function
320core::core_arch::amdgpupermlane16_varfunction
321core::core_arch::amdgpupermlane32_swapfunction
322core::core_arch::amdgpupermlane64_u32function
323core::core_arch::amdgpupermlanex16_u32function
324core::core_arch::amdgpupermlanex16_varfunction
325core::core_arch::amdgpureadlane_u32function
326core::core_arch::amdgpureadlane_u64function
327core::core_arch::amdgpus_barrier_signalfunction
328core::core_arch::amdgpus_barrier_signal_isfirstfunction
329core::core_arch::amdgpus_barrier_waitfunction
330core::core_arch::amdgpus_get_barrier_statefunction
331core::core_arch::amdgpusched_barrierfunction
332core::core_arch::amdgpusched_group_barrierfunction
333core::core_arch::amdgpuupdate_dppfunction
334core::core_arch::amdgpuwritelane_u32function
335core::core_arch::amdgpuwritelane_u64function
336core::core_arch::arm::dsp__qaddfunction
337core::core_arch::arm::dsp__qdblfunction
338core::core_arch::arm::dsp__qsubfunction
339core::core_arch::arm::dsp__smlabbfunction
340core::core_arch::arm::dsp__smlabtfunction
341core::core_arch::arm::dsp__smlatbfunction
342core::core_arch::arm::dsp__smlattfunction
343core::core_arch::arm::dsp__smlawbfunction
344core::core_arch::arm::dsp__smlawtfunction
345core::core_arch::arm::dsp__smulbbfunction
346core::core_arch::arm::dsp__smulbtfunction
347core::core_arch::arm::dsp__smultbfunction
348core::core_arch::arm::dsp__smulttfunction
349core::core_arch::arm::dsp__smulwbfunction
350core::core_arch::arm::dsp__smulwtfunction
351core::core_arch::arm::sat__ssatfunction
352core::core_arch::arm::sat__usatfunction
353core::core_arch::arm::simd32__qadd16function
354core::core_arch::arm::simd32__qadd8function
355core::core_arch::arm::simd32__qasxfunction
356core::core_arch::arm::simd32__qsaxfunction
357core::core_arch::arm::simd32__qsub16function
358core::core_arch::arm::simd32__qsub8function
359core::core_arch::arm::simd32__sadd16function
360core::core_arch::arm::simd32__sadd8function
361core::core_arch::arm::simd32__sasxfunction
362core::core_arch::arm::simd32__selfunction
363core::core_arch::arm::simd32__shadd16function
364core::core_arch::arm::simd32__shadd8function
365core::core_arch::arm::simd32__shsub16function
366core::core_arch::arm::simd32__shsub8function
367core::core_arch::arm::simd32__smladfunction
368core::core_arch::arm::simd32__smlsdfunction
369core::core_arch::arm::simd32__smuadfunction
370core::core_arch::arm::simd32__smuadxfunction
371core::core_arch::arm::simd32__smusdfunction
372core::core_arch::arm::simd32__smusdxfunction
373core::core_arch::arm::simd32__ssub8function
374core::core_arch::arm::simd32__usad8function
375core::core_arch::arm::simd32__usada8function
376core::core_arch::arm::simd32__usub8function
377core::core_arch::arm_shared::barrier__dmbfunction
378core::core_arch::arm_shared::barrier__dsbfunction
379core::core_arch::arm_shared::barrier__isbfunction
380core::core_arch::arm_shared::hints__nopfunction
381core::core_arch::arm_shared::hints__sevfunction
382core::core_arch::arm_shared::hints__sevlfunction
383core::core_arch::arm_shared::hints__wfefunction
384core::core_arch::arm_shared::hints__wfifunction
385core::core_arch::arm_shared::hints__yieldfunction
386core::core_arch::arm_shared::neon::generatedvext_s64function* Neon intrinsic unsafe
387core::core_arch::arm_shared::neon::generatedvext_u64function* Neon intrinsic unsafe
388core::core_arch::arm_shared::neon::generatedvld1_dup_f16function* Neon intrinsic unsafe
389core::core_arch::arm_shared::neon::generatedvld1_dup_f32function* Neon intrinsic unsafe
390core::core_arch::arm_shared::neon::generatedvld1_dup_p16function* Neon intrinsic unsafe
391core::core_arch::arm_shared::neon::generatedvld1_dup_p64function* Neon intrinsic unsafe
392core::core_arch::arm_shared::neon::generatedvld1_dup_p8function* Neon intrinsic unsafe
393core::core_arch::arm_shared::neon::generatedvld1_dup_s16function* Neon intrinsic unsafe
394core::core_arch::arm_shared::neon::generatedvld1_dup_s32function* Neon intrinsic unsafe
395core::core_arch::arm_shared::neon::generatedvld1_dup_s64function* Neon intrinsic unsafe
396core::core_arch::arm_shared::neon::generatedvld1_dup_s8function* Neon intrinsic unsafe
397core::core_arch::arm_shared::neon::generatedvld1_dup_u16function* Neon intrinsic unsafe
398core::core_arch::arm_shared::neon::generatedvld1_dup_u32function* Neon intrinsic unsafe
399core::core_arch::arm_shared::neon::generatedvld1_dup_u64function* Neon intrinsic unsafe
400core::core_arch::arm_shared::neon::generatedvld1_dup_u8function* Neon intrinsic unsafe
401core::core_arch::arm_shared::neon::generatedvld1_f16_x2function* Neon intrinsic unsafe
402core::core_arch::arm_shared::neon::generatedvld1_f16_x3function* Neon intrinsic unsafe
403core::core_arch::arm_shared::neon::generatedvld1_f16_x4function* Neon intrinsic unsafe
404core::core_arch::arm_shared::neon::generatedvld1_f32_x2function* Neon intrinsic unsafe
405core::core_arch::arm_shared::neon::generatedvld1_f32_x3function* Neon intrinsic unsafe
406core::core_arch::arm_shared::neon::generatedvld1_f32_x4function* Neon intrinsic unsafe
407core::core_arch::arm_shared::neon::generatedvld1_lane_f16function* Neon intrinsic unsafe
408core::core_arch::arm_shared::neon::generatedvld1_lane_f32function* Neon intrinsic unsafe
409core::core_arch::arm_shared::neon::generatedvld1_lane_p16function* Neon intrinsic unsafe
410core::core_arch::arm_shared::neon::generatedvld1_lane_p64function* Neon intrinsic unsafe
411core::core_arch::arm_shared::neon::generatedvld1_lane_p8function* Neon intrinsic unsafe
412core::core_arch::arm_shared::neon::generatedvld1_lane_s16function* Neon intrinsic unsafe
413core::core_arch::arm_shared::neon::generatedvld1_lane_s32function* Neon intrinsic unsafe
414core::core_arch::arm_shared::neon::generatedvld1_lane_s64function* Neon intrinsic unsafe
415core::core_arch::arm_shared::neon::generatedvld1_lane_s8function* Neon intrinsic unsafe
416core::core_arch::arm_shared::neon::generatedvld1_lane_u16function* Neon intrinsic unsafe
417core::core_arch::arm_shared::neon::generatedvld1_lane_u32function* Neon intrinsic unsafe
418core::core_arch::arm_shared::neon::generatedvld1_lane_u64function* Neon intrinsic unsafe
419core::core_arch::arm_shared::neon::generatedvld1_lane_u8function* Neon intrinsic unsafe
420core::core_arch::arm_shared::neon::generatedvld1_p16_x2function* Neon intrinsic unsafe
421core::core_arch::arm_shared::neon::generatedvld1_p16_x3function* Neon intrinsic unsafe
422core::core_arch::arm_shared::neon::generatedvld1_p16_x4function* Neon intrinsic unsafe
423core::core_arch::arm_shared::neon::generatedvld1_p64_x2function* Neon intrinsic unsafe
424core::core_arch::arm_shared::neon::generatedvld1_p64_x3function* Neon intrinsic unsafe
425core::core_arch::arm_shared::neon::generatedvld1_p64_x4function* Neon intrinsic unsafe
426core::core_arch::arm_shared::neon::generatedvld1_p8_x2function* Neon intrinsic unsafe
427core::core_arch::arm_shared::neon::generatedvld1_p8_x3function* Neon intrinsic unsafe
428core::core_arch::arm_shared::neon::generatedvld1_p8_x4function* Neon intrinsic unsafe
429core::core_arch::arm_shared::neon::generatedvld1_s16_x2function* Neon intrinsic unsafe
430core::core_arch::arm_shared::neon::generatedvld1_s16_x3function* Neon intrinsic unsafe
431core::core_arch::arm_shared::neon::generatedvld1_s16_x4function* Neon intrinsic unsafe
432core::core_arch::arm_shared::neon::generatedvld1_s32_x2function* Neon intrinsic unsafe
433core::core_arch::arm_shared::neon::generatedvld1_s32_x3function* Neon intrinsic unsafe
434core::core_arch::arm_shared::neon::generatedvld1_s32_x4function* Neon intrinsic unsafe
435core::core_arch::arm_shared::neon::generatedvld1_s64_x2function* Neon intrinsic unsafe
436core::core_arch::arm_shared::neon::generatedvld1_s64_x3function* Neon intrinsic unsafe
437core::core_arch::arm_shared::neon::generatedvld1_s64_x4function* Neon intrinsic unsafe
438core::core_arch::arm_shared::neon::generatedvld1_s8_x2function* Neon intrinsic unsafe
439core::core_arch::arm_shared::neon::generatedvld1_s8_x3function* Neon intrinsic unsafe
440core::core_arch::arm_shared::neon::generatedvld1_s8_x4function* Neon intrinsic unsafe
441core::core_arch::arm_shared::neon::generatedvld1_u16_x2function* Neon intrinsic unsafe
442core::core_arch::arm_shared::neon::generatedvld1_u16_x3function* Neon intrinsic unsafe
443core::core_arch::arm_shared::neon::generatedvld1_u16_x4function* Neon intrinsic unsafe
444core::core_arch::arm_shared::neon::generatedvld1_u32_x2function* Neon intrinsic unsafe
445core::core_arch::arm_shared::neon::generatedvld1_u32_x3function* Neon intrinsic unsafe
446 | core::core_arch::arm_shared::neon::generated | vld1_u32_x4 | function | * Neon intrinsic unsafe
447 | core::core_arch::arm_shared::neon::generated | vld1_u64_x2 | function | * Neon intrinsic unsafe
448 | core::core_arch::arm_shared::neon::generated | vld1_u64_x3 | function | * Neon intrinsic unsafe
449 | core::core_arch::arm_shared::neon::generated | vld1_u64_x4 | function | * Neon intrinsic unsafe
450 | core::core_arch::arm_shared::neon::generated | vld1_u8_x2 | function | * Neon intrinsic unsafe
451 | core::core_arch::arm_shared::neon::generated | vld1_u8_x3 | function | * Neon intrinsic unsafe
452 | core::core_arch::arm_shared::neon::generated | vld1_u8_x4 | function | * Neon intrinsic unsafe
453 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f16 | function | * Neon intrinsic unsafe
454 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f32 | function | * Neon intrinsic unsafe
455 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p16 | function | * Neon intrinsic unsafe
456 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p64 | function | * Neon intrinsic unsafe
457 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p8 | function | * Neon intrinsic unsafe
458 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s16 | function | * Neon intrinsic unsafe
459 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s32 | function | * Neon intrinsic unsafe
460 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s64 | function | * Neon intrinsic unsafe
461 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s8 | function | * Neon intrinsic unsafe
462 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u16 | function | * Neon intrinsic unsafe
463 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u32 | function | * Neon intrinsic unsafe
464 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u64 | function | * Neon intrinsic unsafe
465 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u8 | function | * Neon intrinsic unsafe
466 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x2 | function | * Neon intrinsic unsafe
467 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x3 | function | * Neon intrinsic unsafe
468 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x4 | function | * Neon intrinsic unsafe
469 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x2 | function | * Neon intrinsic unsafe
470 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x3 | function | * Neon intrinsic unsafe
471 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x4 | function | * Neon intrinsic unsafe
472 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f16 | function | * Neon intrinsic unsafe
473 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f32 | function | * Neon intrinsic unsafe
474 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p16 | function | * Neon intrinsic unsafe
475 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p64 | function | * Neon intrinsic unsafe
476 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p8 | function | * Neon intrinsic unsafe
477 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s16 | function | * Neon intrinsic unsafe
478 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s32 | function | * Neon intrinsic unsafe
479 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s64 | function | * Neon intrinsic unsafe
480 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s8 | function | * Neon intrinsic unsafe
481 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u16 | function | * Neon intrinsic unsafe
482 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u32 | function | * Neon intrinsic unsafe
483 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u64 | function | * Neon intrinsic unsafe
484 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u8 | function | * Neon intrinsic unsafe
485 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x2 | function | * Neon intrinsic unsafe
486 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x3 | function | * Neon intrinsic unsafe
487 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x4 | function | * Neon intrinsic unsafe
488 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x2 | function | * Neon intrinsic unsafe
489 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x3 | function | * Neon intrinsic unsafe
490 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x4 | function | * Neon intrinsic unsafe
491 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x2 | function | * Neon intrinsic unsafe
492 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x3 | function | * Neon intrinsic unsafe
493 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x4 | function | * Neon intrinsic unsafe
494 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x2 | function | * Neon intrinsic unsafe
495 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x3 | function | * Neon intrinsic unsafe
496 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x4 | function | * Neon intrinsic unsafe
497 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x2 | function | * Neon intrinsic unsafe
498 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x3 | function | * Neon intrinsic unsafe
499 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x4 | function | * Neon intrinsic unsafe
500 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x2 | function | * Neon intrinsic unsafe
501 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x3 | function | * Neon intrinsic unsafe
502 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x4 | function | * Neon intrinsic unsafe
503 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x2 | function | * Neon intrinsic unsafe
504 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x3 | function | * Neon intrinsic unsafe
505 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x4 | function | * Neon intrinsic unsafe
506 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x2 | function | * Neon intrinsic unsafe
507 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x3 | function | * Neon intrinsic unsafe
508 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x4 | function | * Neon intrinsic unsafe
509 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x2 | function | * Neon intrinsic unsafe
510 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x3 | function | * Neon intrinsic unsafe
511 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x4 | function | * Neon intrinsic unsafe
512 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x2 | function | * Neon intrinsic unsafe
513 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x3 | function | * Neon intrinsic unsafe
514 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x4 | function | * Neon intrinsic unsafe
515 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x2 | function | * Neon intrinsic unsafe
516 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x3 | function | * Neon intrinsic unsafe
517 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x4 | function | * Neon intrinsic unsafe
518 | core::core_arch::arm_shared::neon::generated | vld2_dup_f16 | function | * Neon intrinsic unsafe
519 | core::core_arch::arm_shared::neon::generated | vld2_dup_f32 | function | * Neon intrinsic unsafe
520 | core::core_arch::arm_shared::neon::generated | vld2_dup_p16 | function | * Neon intrinsic unsafe
521 | core::core_arch::arm_shared::neon::generated | vld2_dup_p64 | function | * Neon intrinsic unsafe
522 | core::core_arch::arm_shared::neon::generated | vld2_dup_p8 | function | * Neon intrinsic unsafe
523 | core::core_arch::arm_shared::neon::generated | vld2_dup_s16 | function | * Neon intrinsic unsafe
524 | core::core_arch::arm_shared::neon::generated | vld2_dup_s32 | function | * Neon intrinsic unsafe
525 | core::core_arch::arm_shared::neon::generated | vld2_dup_s64 | function | * Neon intrinsic unsafe
526 | core::core_arch::arm_shared::neon::generated | vld2_dup_s8 | function | * Neon intrinsic unsafe
527 | core::core_arch::arm_shared::neon::generated | vld2_dup_u16 | function | * Neon intrinsic unsafe
528 | core::core_arch::arm_shared::neon::generated | vld2_dup_u32 | function | * Neon intrinsic unsafe
529 | core::core_arch::arm_shared::neon::generated | vld2_dup_u64 | function | * Neon intrinsic unsafe
530 | core::core_arch::arm_shared::neon::generated | vld2_dup_u8 | function | * Neon intrinsic unsafe
531 | core::core_arch::arm_shared::neon::generated | vld2_f16 | function | * Neon intrinsic unsafe
532 | core::core_arch::arm_shared::neon::generated | vld2_f32 | function | * Neon intrinsic unsafe
533 | core::core_arch::arm_shared::neon::generated | vld2_lane_f16 | function | * Neon intrinsic unsafe
534 | core::core_arch::arm_shared::neon::generated | vld2_lane_f32 | function | * Neon intrinsic unsafe
535 | core::core_arch::arm_shared::neon::generated | vld2_lane_p16 | function | * Neon intrinsic unsafe
536 | core::core_arch::arm_shared::neon::generated | vld2_lane_p8 | function | * Neon intrinsic unsafe
537 | core::core_arch::arm_shared::neon::generated | vld2_lane_s16 | function | * Neon intrinsic unsafe
538 | core::core_arch::arm_shared::neon::generated | vld2_lane_s32 | function | * Neon intrinsic unsafe
539 | core::core_arch::arm_shared::neon::generated | vld2_lane_s8 | function | * Neon intrinsic unsafe
540 | core::core_arch::arm_shared::neon::generated | vld2_lane_u16 | function | * Neon intrinsic unsafe
541 | core::core_arch::arm_shared::neon::generated | vld2_lane_u32 | function | * Neon intrinsic unsafe
542 | core::core_arch::arm_shared::neon::generated | vld2_lane_u8 | function | * Neon intrinsic unsafe
543 | core::core_arch::arm_shared::neon::generated | vld2_p16 | function | * Neon intrinsic unsafe
544 | core::core_arch::arm_shared::neon::generated | vld2_p64 | function | * Neon intrinsic unsafe
545 | core::core_arch::arm_shared::neon::generated | vld2_p8 | function | * Neon intrinsic unsafe
546 | core::core_arch::arm_shared::neon::generated | vld2_s16 | function | * Neon intrinsic unsafe
547 | core::core_arch::arm_shared::neon::generated | vld2_s32 | function | * Neon intrinsic unsafe
548 | core::core_arch::arm_shared::neon::generated | vld2_s64 | function | * Neon intrinsic unsafe
549 | core::core_arch::arm_shared::neon::generated | vld2_s8 | function | * Neon intrinsic unsafe
550 | core::core_arch::arm_shared::neon::generated | vld2_u16 | function | * Neon intrinsic unsafe
551 | core::core_arch::arm_shared::neon::generated | vld2_u32 | function | * Neon intrinsic unsafe
552 | core::core_arch::arm_shared::neon::generated | vld2_u64 | function | * Neon intrinsic unsafe
553 | core::core_arch::arm_shared::neon::generated | vld2_u8 | function | * Neon intrinsic unsafe
554 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f16 | function | * Neon intrinsic unsafe
555 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f32 | function | * Neon intrinsic unsafe
556 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p16 | function | * Neon intrinsic unsafe
557 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p8 | function | * Neon intrinsic unsafe
558 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s16 | function | * Neon intrinsic unsafe
559 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s32 | function | * Neon intrinsic unsafe
560 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s8 | function | * Neon intrinsic unsafe
561 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u16 | function | * Neon intrinsic unsafe
562 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u32 | function | * Neon intrinsic unsafe
563 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u8 | function | * Neon intrinsic unsafe
564 | core::core_arch::arm_shared::neon::generated | vld2q_f16 | function | * Neon intrinsic unsafe
565 | core::core_arch::arm_shared::neon::generated | vld2q_f32 | function | * Neon intrinsic unsafe
566 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f16 | function | * Neon intrinsic unsafe
567 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f32 | function | * Neon intrinsic unsafe
568 | core::core_arch::arm_shared::neon::generated | vld2q_lane_p16 | function | * Neon intrinsic unsafe
569 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s16 | function | * Neon intrinsic unsafe
570 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s32 | function | * Neon intrinsic unsafe
571 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u16 | function | * Neon intrinsic unsafe
572 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u32 | function | * Neon intrinsic unsafe
573 | core::core_arch::arm_shared::neon::generated | vld2q_p16 | function | * Neon intrinsic unsafe
574 | core::core_arch::arm_shared::neon::generated | vld2q_p8 | function | * Neon intrinsic unsafe
575 | core::core_arch::arm_shared::neon::generated | vld2q_s16 | function | * Neon intrinsic unsafe
576 | core::core_arch::arm_shared::neon::generated | vld2q_s32 | function | * Neon intrinsic unsafe
577 | core::core_arch::arm_shared::neon::generated | vld2q_s8 | function | * Neon intrinsic unsafe
578 | core::core_arch::arm_shared::neon::generated | vld2q_u16 | function | * Neon intrinsic unsafe
579 | core::core_arch::arm_shared::neon::generated | vld2q_u32 | function | * Neon intrinsic unsafe
580 | core::core_arch::arm_shared::neon::generated | vld2q_u8 | function | * Neon intrinsic unsafe
581 | core::core_arch::arm_shared::neon::generated | vld3_dup_f16 | function | * Neon intrinsic unsafe
582 | core::core_arch::arm_shared::neon::generated | vld3_dup_f32 | function | * Neon intrinsic unsafe
583 | core::core_arch::arm_shared::neon::generated | vld3_dup_p16 | function | * Neon intrinsic unsafe
584 | core::core_arch::arm_shared::neon::generated | vld3_dup_p64 | function | * Neon intrinsic unsafe
585 | core::core_arch::arm_shared::neon::generated | vld3_dup_p8 | function | * Neon intrinsic unsafe
586 | core::core_arch::arm_shared::neon::generated | vld3_dup_s16 | function | * Neon intrinsic unsafe
587 | core::core_arch::arm_shared::neon::generated | vld3_dup_s32 | function | * Neon intrinsic unsafe
588 | core::core_arch::arm_shared::neon::generated | vld3_dup_s64 | function | * Neon intrinsic unsafe
589 | core::core_arch::arm_shared::neon::generated | vld3_dup_s8 | function | * Neon intrinsic unsafe
590 | core::core_arch::arm_shared::neon::generated | vld3_dup_u16 | function | * Neon intrinsic unsafe
591 | core::core_arch::arm_shared::neon::generated | vld3_dup_u32 | function | * Neon intrinsic unsafe
592 | core::core_arch::arm_shared::neon::generated | vld3_dup_u64 | function | * Neon intrinsic unsafe
593 | core::core_arch::arm_shared::neon::generated | vld3_dup_u8 | function | * Neon intrinsic unsafe
594 | core::core_arch::arm_shared::neon::generated | vld3_f16 | function | * Neon intrinsic unsafe
595 | core::core_arch::arm_shared::neon::generated | vld3_f32 | function | * Neon intrinsic unsafe
596 | core::core_arch::arm_shared::neon::generated | vld3_lane_f16 | function | * Neon intrinsic unsafe
597 | core::core_arch::arm_shared::neon::generated | vld3_lane_f32 | function | * Neon intrinsic unsafe
598 | core::core_arch::arm_shared::neon::generated | vld3_lane_p16 | function | * Neon intrinsic unsafe
599 | core::core_arch::arm_shared::neon::generated | vld3_lane_p8 | function | * Neon intrinsic unsafe
600 | core::core_arch::arm_shared::neon::generated | vld3_lane_s16 | function | * Neon intrinsic unsafe
601 | core::core_arch::arm_shared::neon::generated | vld3_lane_s32 | function | * Neon intrinsic unsafe
602 | core::core_arch::arm_shared::neon::generated | vld3_lane_s8 | function | * Neon intrinsic unsafe
603 | core::core_arch::arm_shared::neon::generated | vld3_lane_u16 | function | * Neon intrinsic unsafe
604 | core::core_arch::arm_shared::neon::generated | vld3_lane_u32 | function | * Neon intrinsic unsafe
605 | core::core_arch::arm_shared::neon::generated | vld3_lane_u8 | function | * Neon intrinsic unsafe
606 | core::core_arch::arm_shared::neon::generated | vld3_p16 | function | * Neon intrinsic unsafe
607 | core::core_arch::arm_shared::neon::generated | vld3_p64 | function | * Neon intrinsic unsafe
608 | core::core_arch::arm_shared::neon::generated | vld3_p8 | function | * Neon intrinsic unsafe
609 | core::core_arch::arm_shared::neon::generated | vld3_s16 | function | * Neon intrinsic unsafe
610 | core::core_arch::arm_shared::neon::generated | vld3_s32 | function | * Neon intrinsic unsafe
611 | core::core_arch::arm_shared::neon::generated | vld3_s64 | function | * Neon intrinsic unsafe
612 | core::core_arch::arm_shared::neon::generated | vld3_s8 | function | * Neon intrinsic unsafe
613 | core::core_arch::arm_shared::neon::generated | vld3_u16 | function | * Neon intrinsic unsafe
614 | core::core_arch::arm_shared::neon::generated | vld3_u32 | function | * Neon intrinsic unsafe
615 | core::core_arch::arm_shared::neon::generated | vld3_u64 | function | * Neon intrinsic unsafe
616 | core::core_arch::arm_shared::neon::generated | vld3_u8 | function | * Neon intrinsic unsafe
617 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f16 | function | * Neon intrinsic unsafe
618 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f32 | function | * Neon intrinsic unsafe
619 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p16 | function | * Neon intrinsic unsafe
620 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p8 | function | * Neon intrinsic unsafe
621 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s16 | function | * Neon intrinsic unsafe
622 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s32 | function | * Neon intrinsic unsafe
623 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s8 | function | * Neon intrinsic unsafe
624 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u16 | function | * Neon intrinsic unsafe
625 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u32 | function | * Neon intrinsic unsafe
626 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u8 | function | * Neon intrinsic unsafe
627 | core::core_arch::arm_shared::neon::generated | vld3q_f16 | function | * Neon intrinsic unsafe
628 | core::core_arch::arm_shared::neon::generated | vld3q_f32 | function | * Neon intrinsic unsafe
629 | core::core_arch::arm_shared::neon::generated | vld3q_lane_f16 | function | * Neon intrinsic unsafe
630 | core::core_arch::arm_shared::neon::generated | vld3q_lane_f32 | function | * Neon intrinsic unsafe
631 | core::core_arch::arm_shared::neon::generated | vld3q_lane_p16 | function | * Neon intrinsic unsafe
632 | core::core_arch::arm_shared::neon::generated | vld3q_lane_s16 | function | * Neon intrinsic unsafe
633 | core::core_arch::arm_shared::neon::generated | vld3q_lane_s32 | function | * Neon intrinsic unsafe
634 | core::core_arch::arm_shared::neon::generated | vld3q_lane_u16 | function | * Neon intrinsic unsafe
635 | core::core_arch::arm_shared::neon::generated | vld3q_lane_u32 | function | * Neon intrinsic unsafe
636 | core::core_arch::arm_shared::neon::generated | vld3q_p16 | function | * Neon intrinsic unsafe
637 | core::core_arch::arm_shared::neon::generated | vld3q_p8 | function | * Neon intrinsic unsafe
638 | core::core_arch::arm_shared::neon::generated | vld3q_s16 | function | * Neon intrinsic unsafe
639 | core::core_arch::arm_shared::neon::generated | vld3q_s32 | function | * Neon intrinsic unsafe
640 | core::core_arch::arm_shared::neon::generated | vld3q_s8 | function | * Neon intrinsic unsafe
641 | core::core_arch::arm_shared::neon::generated | vld3q_u16 | function | * Neon intrinsic unsafe
642 | core::core_arch::arm_shared::neon::generated | vld3q_u32 | function | * Neon intrinsic unsafe
643 | core::core_arch::arm_shared::neon::generated | vld3q_u8 | function | * Neon intrinsic unsafe
644 | core::core_arch::arm_shared::neon::generated | vld4_dup_f16 | function | * Neon intrinsic unsafe
645 | core::core_arch::arm_shared::neon::generated | vld4_dup_f32 | function | * Neon intrinsic unsafe
646 | core::core_arch::arm_shared::neon::generated | vld4_dup_p16 | function | * Neon intrinsic unsafe
647 | core::core_arch::arm_shared::neon::generated | vld4_dup_p64 | function | * Neon intrinsic unsafe
648 | core::core_arch::arm_shared::neon::generated | vld4_dup_p8 | function | * Neon intrinsic unsafe
649 | core::core_arch::arm_shared::neon::generated | vld4_dup_s16 | function | * Neon intrinsic unsafe
650 | core::core_arch::arm_shared::neon::generated | vld4_dup_s32 | function | * Neon intrinsic unsafe
651 | core::core_arch::arm_shared::neon::generated | vld4_dup_s64 | function | * Neon intrinsic unsafe
652 | core::core_arch::arm_shared::neon::generated | vld4_dup_s8 | function | * Neon intrinsic unsafe
653 | core::core_arch::arm_shared::neon::generated | vld4_dup_u16 | function | * Neon intrinsic unsafe
654 | core::core_arch::arm_shared::neon::generated | vld4_dup_u32 | function | * Neon intrinsic unsafe
655 | core::core_arch::arm_shared::neon::generated | vld4_dup_u64 | function | * Neon intrinsic unsafe
656 | core::core_arch::arm_shared::neon::generated | vld4_dup_u8 | function | * Neon intrinsic unsafe
657 | core::core_arch::arm_shared::neon::generated | vld4_f16 | function | * Neon intrinsic unsafe
658 | core::core_arch::arm_shared::neon::generated | vld4_f32 | function | * Neon intrinsic unsafe
659 | core::core_arch::arm_shared::neon::generated | vld4_lane_f16 | function | * Neon intrinsic unsafe
660 | core::core_arch::arm_shared::neon::generated | vld4_lane_f32 | function | * Neon intrinsic unsafe
661 | core::core_arch::arm_shared::neon::generated | vld4_lane_p16 | function | * Neon intrinsic unsafe
662 | core::core_arch::arm_shared::neon::generated | vld4_lane_p8 | function | * Neon intrinsic unsafe
663 | core::core_arch::arm_shared::neon::generated | vld4_lane_s16 | function | * Neon intrinsic unsafe
664 | core::core_arch::arm_shared::neon::generated | vld4_lane_s32 | function | * Neon intrinsic unsafe
665 | core::core_arch::arm_shared::neon::generated | vld4_lane_s8 | function | * Neon intrinsic unsafe
666 | core::core_arch::arm_shared::neon::generated | vld4_lane_u16 | function | * Neon intrinsic unsafe
667 | core::core_arch::arm_shared::neon::generated | vld4_lane_u32 | function | * Neon intrinsic unsafe
668 | core::core_arch::arm_shared::neon::generated | vld4_lane_u8 | function | * Neon intrinsic unsafe
669 | core::core_arch::arm_shared::neon::generated | vld4_p16 | function | * Neon intrinsic unsafe
670 | core::core_arch::arm_shared::neon::generated | vld4_p64 | function | * Neon intrinsic unsafe
671 | core::core_arch::arm_shared::neon::generated | vld4_p8 | function | * Neon intrinsic unsafe
672 | core::core_arch::arm_shared::neon::generated | vld4_s16 | function | * Neon intrinsic unsafe
673 | core::core_arch::arm_shared::neon::generated | vld4_s32 | function | * Neon intrinsic unsafe
674 | core::core_arch::arm_shared::neon::generated | vld4_s64 | function | * Neon intrinsic unsafe
675 | core::core_arch::arm_shared::neon::generated | vld4_s8 | function | * Neon intrinsic unsafe
676 | core::core_arch::arm_shared::neon::generated | vld4_u16 | function | * Neon intrinsic unsafe
677 | core::core_arch::arm_shared::neon::generated | vld4_u32 | function | * Neon intrinsic unsafe
678 | core::core_arch::arm_shared::neon::generated | vld4_u64 | function | * Neon intrinsic unsafe
679 | core::core_arch::arm_shared::neon::generated | vld4_u8 | function | * Neon intrinsic unsafe
680 | core::core_arch::arm_shared::neon::generated | vld4q_dup_f16 | function | * Neon intrinsic unsafe
681 | core::core_arch::arm_shared::neon::generated | vld4q_dup_f32 | function | * Neon intrinsic unsafe
682 | core::core_arch::arm_shared::neon::generated | vld4q_dup_p16 | function | * Neon intrinsic unsafe
683 | core::core_arch::arm_shared::neon::generated | vld4q_dup_p8 | function | * Neon intrinsic unsafe
684 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s16 | function | * Neon intrinsic unsafe
685 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s32 | function | * Neon intrinsic unsafe
686 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s8 | function | * Neon intrinsic unsafe
687 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u16 | function | * Neon intrinsic unsafe
688 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u32 | function | * Neon intrinsic unsafe
689 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u8 | function | * Neon intrinsic unsafe
690 | core::core_arch::arm_shared::neon::generated | vld4q_f16 | function | * Neon intrinsic unsafe
691 | core::core_arch::arm_shared::neon::generated | vld4q_f32 | function | * Neon intrinsic unsafe
692 | core::core_arch::arm_shared::neon::generated | vld4q_lane_f16 | function | * Neon intrinsic unsafe
693 | core::core_arch::arm_shared::neon::generated | vld4q_lane_f32 | function | * Neon intrinsic unsafe
694 | core::core_arch::arm_shared::neon::generated | vld4q_lane_p16 | function | * Neon intrinsic unsafe
695 | core::core_arch::arm_shared::neon::generated | vld4q_lane_s16 | function | * Neon intrinsic unsafe
696 | core::core_arch::arm_shared::neon::generated | vld4q_lane_s32 | function | * Neon intrinsic unsafe
697 | core::core_arch::arm_shared::neon::generated | vld4q_lane_u16 | function | * Neon intrinsic unsafe
698 | core::core_arch::arm_shared::neon::generated | vld4q_lane_u32 | function | * Neon intrinsic unsafe
699 | core::core_arch::arm_shared::neon::generated | vld4q_p16 | function | * Neon intrinsic unsafe
700 | core::core_arch::arm_shared::neon::generated | vld4q_p8 | function | * Neon intrinsic unsafe
701 | core::core_arch::arm_shared::neon::generated | vld4q_s16 | function | * Neon intrinsic unsafe
702 | core::core_arch::arm_shared::neon::generated | vld4q_s32 | function | * Neon intrinsic unsafe
703 | core::core_arch::arm_shared::neon::generated | vld4q_s8 | function | * Neon intrinsic unsafe
704 | core::core_arch::arm_shared::neon::generated | vld4q_u16 | function | * Neon intrinsic unsafe
705 | core::core_arch::arm_shared::neon::generated | vld4q_u32 | function | * Neon intrinsic unsafe
706 | core::core_arch::arm_shared::neon::generated | vld4q_u8 | function | * Neon intrinsic unsafe
707 | core::core_arch::arm_shared::neon::generated | vldrq_p128 | function | * Neon intrinsic unsafe
708 | core::core_arch::arm_shared::neon::generated | vst1_f16_x2 | function | * Neon intrinsic unsafe
709 | core::core_arch::arm_shared::neon::generated | vst1_f16_x3 | function | * Neon intrinsic unsafe
710 | core::core_arch::arm_shared::neon::generated | vst1_f16_x4 | function | * Neon intrinsic unsafe
711 | core::core_arch::arm_shared::neon::generated | vst1_f32_x2 | function | * Neon intrinsic unsafe
712 | core::core_arch::arm_shared::neon::generated | vst1_f32_x3 | function | * Neon intrinsic unsafe
713 | core::core_arch::arm_shared::neon::generated | vst1_f32_x4 | function | * Neon intrinsic unsafe
714 | core::core_arch::arm_shared::neon::generated | vst1_lane_f16 | function | * Neon intrinsic unsafe
715 | core::core_arch::arm_shared::neon::generated | vst1_lane_f32 | function | * Neon intrinsic unsafe
716 | core::core_arch::arm_shared::neon::generated | vst1_lane_p16 | function | * Neon intrinsic unsafe
717 | core::core_arch::arm_shared::neon::generated | vst1_lane_p64 | function | * Neon intrinsic unsafe
718 | core::core_arch::arm_shared::neon::generated | vst1_lane_p8 | function | * Neon intrinsic unsafe
719 | core::core_arch::arm_shared::neon::generated | vst1_lane_s16 | function | * Neon intrinsic unsafe
720 | core::core_arch::arm_shared::neon::generated | vst1_lane_s32 | function | * Neon intrinsic unsafe
721 | core::core_arch::arm_shared::neon::generated | vst1_lane_s64 | function | * Neon intrinsic unsafe
722 | core::core_arch::arm_shared::neon::generated | vst1_lane_s8 | function | * Neon intrinsic unsafe
723 | core::core_arch::arm_shared::neon::generated | vst1_lane_u16 | function | * Neon intrinsic unsafe
724 | core::core_arch::arm_shared::neon::generated | vst1_lane_u32 | function | * Neon intrinsic unsafe
725 | core::core_arch::arm_shared::neon::generated | vst1_lane_u64 | function | * Neon intrinsic unsafe
726 | core::core_arch::arm_shared::neon::generated | vst1_lane_u8 | function | * Neon intrinsic unsafe
727 | core::core_arch::arm_shared::neon::generated | vst1_p16_x2 | function | * Neon intrinsic unsafe
728 | core::core_arch::arm_shared::neon::generated | vst1_p16_x3 | function | * Neon intrinsic unsafe
729 | core::core_arch::arm_shared::neon::generated | vst1_p16_x4 | function | * Neon intrinsic unsafe
730 | core::core_arch::arm_shared::neon::generated | vst1_p64_x2 | function | * Neon intrinsic unsafe
731 | core::core_arch::arm_shared::neon::generated | vst1_p64_x3 | function | * Neon intrinsic unsafe
732 | core::core_arch::arm_shared::neon::generated | vst1_p64_x4 | function | * Neon intrinsic unsafe
733 | core::core_arch::arm_shared::neon::generated | vst1_p8_x2 | function | * Neon intrinsic unsafe
734 | core::core_arch::arm_shared::neon::generated | vst1_p8_x3 | function | * Neon intrinsic unsafe
735 | core::core_arch::arm_shared::neon::generated | vst1_p8_x4 | function | * Neon intrinsic unsafe
736 | core::core_arch::arm_shared::neon::generated | vst1_s16_x2 | function | * Neon intrinsic unsafe
737 | core::core_arch::arm_shared::neon::generated | vst1_s16_x3 | function | * Neon intrinsic unsafe
738 | core::core_arch::arm_shared::neon::generated | vst1_s16_x4 | function | * Neon intrinsic unsafe
739 | core::core_arch::arm_shared::neon::generated | vst1_s32_x2 | function | * Neon intrinsic unsafe
740 | core::core_arch::arm_shared::neon::generated | vst1_s32_x3 | function | * Neon intrinsic unsafe
741 | core::core_arch::arm_shared::neon::generated | vst1_s32_x4 | function | * Neon intrinsic unsafe
742 | core::core_arch::arm_shared::neon::generated | vst1_s64_x2 | function | * Neon intrinsic unsafe
743 | core::core_arch::arm_shared::neon::generated | vst1_s64_x3 | function | * Neon intrinsic unsafe
744 | core::core_arch::arm_shared::neon::generated | vst1_s64_x4 | function | * Neon intrinsic unsafe
745 | core::core_arch::arm_shared::neon::generated | vst1_s8_x2 | function | * Neon intrinsic unsafe
746 | core::core_arch::arm_shared::neon::generated | vst1_s8_x3 | function | * Neon intrinsic unsafe
747 | core::core_arch::arm_shared::neon::generated | vst1_s8_x4 | function | * Neon intrinsic unsafe
748 | core::core_arch::arm_shared::neon::generated | vst1_u16_x2 | function | * Neon intrinsic unsafe
749 | core::core_arch::arm_shared::neon::generated | vst1_u16_x3 | function | * Neon intrinsic unsafe
750 | core::core_arch::arm_shared::neon::generated | vst1_u16_x4 | function | * Neon intrinsic unsafe
751 | core::core_arch::arm_shared::neon::generated | vst1_u32_x2 | function | * Neon intrinsic unsafe
752 | core::core_arch::arm_shared::neon::generated | vst1_u32_x3 | function | * Neon intrinsic unsafe
753 | core::core_arch::arm_shared::neon::generated | vst1_u32_x4 | function | * Neon intrinsic unsafe
754 | core::core_arch::arm_shared::neon::generated | vst1_u64_x2 | function | * Neon intrinsic unsafe
755 | core::core_arch::arm_shared::neon::generated | vst1_u64_x3 | function | * Neon intrinsic unsafe
756 | core::core_arch::arm_shared::neon::generated | vst1_u64_x4 | function | * Neon intrinsic unsafe
757 | core::core_arch::arm_shared::neon::generated | vst1_u8_x2 | function | * Neon intrinsic unsafe
758 | core::core_arch::arm_shared::neon::generated | vst1_u8_x3 | function | * Neon intrinsic unsafe
759 | core::core_arch::arm_shared::neon::generated | vst1_u8_x4 | function | * Neon intrinsic unsafe
760 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x2 | function | * Neon intrinsic unsafe
761 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x3 | function | * Neon intrinsic unsafe
762 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x4 | function | * Neon intrinsic unsafe
763 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x2 | function | * Neon intrinsic unsafe
764 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x3 | function | * Neon intrinsic unsafe
765 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x4 | function | * Neon intrinsic unsafe
766 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f16 | function | * Neon intrinsic unsafe
767 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f32 | function | * Neon intrinsic unsafe
768 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p16 | function | * Neon intrinsic unsafe
769 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p64 | function | * Neon intrinsic unsafe
770 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p8 | function | * Neon intrinsic unsafe
771 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s16 | function | * Neon intrinsic unsafe
772 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s32 | function | * Neon intrinsic unsafe
773 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s64 | function | * Neon intrinsic unsafe
774 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s8 | function | * Neon intrinsic unsafe
775 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u16 | function | * Neon intrinsic unsafe
776 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u32 | function | * Neon intrinsic unsafe
777 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u64 | function | * Neon intrinsic unsafe
778 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u8 | function | * Neon intrinsic unsafe
779 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x2 | function | * Neon intrinsic unsafe
780 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x3 | function | * Neon intrinsic unsafe
781 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x4 | function | * Neon intrinsic unsafe
782 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x2 | function | * Neon intrinsic unsafe
783 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x3 | function | * Neon intrinsic unsafe
784 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x4 | function | * Neon intrinsic unsafe
785 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x2 | function | * Neon intrinsic unsafe
786 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x3 | function | * Neon intrinsic unsafe
787 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x4 | function | * Neon intrinsic unsafe
788 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x2 | function | * Neon intrinsic unsafe
789 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x3 | function | * Neon intrinsic unsafe
790 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x4 | function | * Neon intrinsic unsafe
791 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x2 | function | * Neon intrinsic unsafe
792 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x3 | function | * Neon intrinsic unsafe
793 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x4 | function | * Neon intrinsic unsafe
794 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x2 | function | * Neon intrinsic unsafe
795 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x3 | function | * Neon intrinsic unsafe
796 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x4 | function | * Neon intrinsic unsafe
797 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x2 | function | * Neon intrinsic unsafe
798 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x3 | function | * Neon intrinsic unsafe
799 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x4 | function | * Neon intrinsic unsafe
800 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x2 | function | * Neon intrinsic unsafe
801 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x3 | function | * Neon intrinsic unsafe
802 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x4 | function | * Neon intrinsic unsafe
803 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x2 | function | * Neon intrinsic unsafe
804 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x3 | function | * Neon intrinsic unsafe
805 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x4 | function | * Neon intrinsic unsafe
806 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x2 | function | * Neon intrinsic unsafe
807 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x3 | function | * Neon intrinsic unsafe
808 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x4 | function | * Neon intrinsic unsafe
809 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x2 | function | * Neon intrinsic unsafe
810 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x3 | function | * Neon intrinsic unsafe
811 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x4 | function | * Neon intrinsic unsafe
812 | core::core_arch::arm_shared::neon::generated | vst2_f16 | function | * Neon intrinsic unsafe
813 | core::core_arch::arm_shared::neon::generated | vst2_f32 | function | * Neon intrinsic unsafe
814 | core::core_arch::arm_shared::neon::generated | vst2_lane_f16 | function | * Neon intrinsic unsafe
815 | core::core_arch::arm_shared::neon::generated | vst2_lane_f32 | function | * Neon intrinsic unsafe
816 | core::core_arch::arm_shared::neon::generated | vst2_lane_p16 | function | * Neon intrinsic unsafe
817 | core::core_arch::arm_shared::neon::generated | vst2_lane_p8 | function | * Neon intrinsic unsafe
818 | core::core_arch::arm_shared::neon::generated | vst2_lane_s16 | function | * Neon intrinsic unsafe
819 | core::core_arch::arm_shared::neon::generated | vst2_lane_s32 | function | * Neon intrinsic unsafe
820 | core::core_arch::arm_shared::neon::generated | vst2_lane_s8 | function | * Neon intrinsic unsafe
821 | core::core_arch::arm_shared::neon::generated | vst2_lane_u16 | function | * Neon intrinsic unsafe
822 | core::core_arch::arm_shared::neon::generated | vst2_lane_u32 | function | * Neon intrinsic unsafe
823 | core::core_arch::arm_shared::neon::generated | vst2_lane_u8 | function | * Neon intrinsic unsafe
824 | core::core_arch::arm_shared::neon::generated | vst2_p16 | function | * Neon intrinsic unsafe
825 | core::core_arch::arm_shared::neon::generated | vst2_p64 | function | * Neon intrinsic unsafe
826 | core::core_arch::arm_shared::neon::generated | vst2_p8 | function | * Neon intrinsic unsafe
827 | core::core_arch::arm_shared::neon::generated | vst2_s16 | function | * Neon intrinsic unsafe
828 | core::core_arch::arm_shared::neon::generated | vst2_s32 | function | * Neon intrinsic unsafe
829 | core::core_arch::arm_shared::neon::generated | vst2_s64 | function | * Neon intrinsic unsafe
830 | core::core_arch::arm_shared::neon::generated | vst2_s8 | function | * Neon intrinsic unsafe
831 | core::core_arch::arm_shared::neon::generated | vst2_u16 | function | * Neon intrinsic unsafe
832 | core::core_arch::arm_shared::neon::generated | vst2_u32 | function | * Neon intrinsic unsafe
833 | core::core_arch::arm_shared::neon::generated | vst2_u64 | function | * Neon intrinsic unsafe
834 | core::core_arch::arm_shared::neon::generated | vst2_u8 | function | * Neon intrinsic unsafe
835 | core::core_arch::arm_shared::neon::generated | vst2q_f16 | function | * Neon intrinsic unsafe
836 | core::core_arch::arm_shared::neon::generated | vst2q_f32 | function | * Neon intrinsic unsafe
837 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f16 | function | * Neon intrinsic unsafe
838 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f32 | function | * Neon intrinsic unsafe
839 | core::core_arch::arm_shared::neon::generated | vst2q_lane_p16 | function | * Neon intrinsic unsafe
840 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s16 | function | * Neon intrinsic unsafe
841 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s32 | function | * Neon intrinsic unsafe
842 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u16 | function | * Neon intrinsic unsafe
843 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u32 | function | * Neon intrinsic unsafe
844 | core::core_arch::arm_shared::neon::generated | vst2q_p16 | function | * Neon intrinsic unsafe
845core::core_arch::arm_shared::neon::generatedvst2q_p8function* Neon intrinsic unsafe
846core::core_arch::arm_shared::neon::generatedvst2q_s16function* Neon intrinsic unsafe
847core::core_arch::arm_shared::neon::generatedvst2q_s32function* Neon intrinsic unsafe
848core::core_arch::arm_shared::neon::generatedvst2q_s8function* Neon intrinsic unsafe
849core::core_arch::arm_shared::neon::generatedvst2q_u16function* Neon intrinsic unsafe
850core::core_arch::arm_shared::neon::generatedvst2q_u32function* Neon intrinsic unsafe
851core::core_arch::arm_shared::neon::generatedvst2q_u8function* Neon intrinsic unsafe
852core::core_arch::arm_shared::neon::generatedvst3_f16function* Neon intrinsic unsafe
853core::core_arch::arm_shared::neon::generatedvst3_f32function* Neon intrinsic unsafe
854core::core_arch::arm_shared::neon::generatedvst3_lane_f16function* Neon intrinsic unsafe
855core::core_arch::arm_shared::neon::generatedvst3_lane_f32function* Neon intrinsic unsafe
856core::core_arch::arm_shared::neon::generatedvst3_lane_p16function* Neon intrinsic unsafe
857core::core_arch::arm_shared::neon::generatedvst3_lane_p8function* Neon intrinsic unsafe
858core::core_arch::arm_shared::neon::generatedvst3_lane_s16function* Neon intrinsic unsafe
859core::core_arch::arm_shared::neon::generatedvst3_lane_s32function* Neon intrinsic unsafe
860core::core_arch::arm_shared::neon::generatedvst3_lane_s8function* Neon intrinsic unsafe
861core::core_arch::arm_shared::neon::generatedvst3_lane_u16function* Neon intrinsic unsafe
862core::core_arch::arm_shared::neon::generatedvst3_lane_u32function* Neon intrinsic unsafe
863core::core_arch::arm_shared::neon::generatedvst3_lane_u8function* Neon intrinsic unsafe
864core::core_arch::arm_shared::neon::generatedvst3_p16function* Neon intrinsic unsafe
865core::core_arch::arm_shared::neon::generatedvst3_p64function* Neon intrinsic unsafe
866core::core_arch::arm_shared::neon::generatedvst3_p8function* Neon intrinsic unsafe
867core::core_arch::arm_shared::neon::generatedvst3_s16function* Neon intrinsic unsafe
868core::core_arch::arm_shared::neon::generatedvst3_s32function* Neon intrinsic unsafe
869core::core_arch::arm_shared::neon::generatedvst3_s64function* Neon intrinsic unsafe
870core::core_arch::arm_shared::neon::generatedvst3_s8function* Neon intrinsic unsafe
871core::core_arch::arm_shared::neon::generatedvst3_u16function* Neon intrinsic unsafe
872core::core_arch::arm_shared::neon::generatedvst3_u32function* Neon intrinsic unsafe
873core::core_arch::arm_shared::neon::generatedvst3_u64function* Neon intrinsic unsafe
874core::core_arch::arm_shared::neon::generatedvst3_u8function* Neon intrinsic unsafe
875core::core_arch::arm_shared::neon::generatedvst3q_f16function* Neon intrinsic unsafe
876core::core_arch::arm_shared::neon::generatedvst3q_f32function* Neon intrinsic unsafe
877core::core_arch::arm_shared::neon::generatedvst3q_lane_f16function* Neon intrinsic unsafe
878core::core_arch::arm_shared::neon::generatedvst3q_lane_f32function* Neon intrinsic unsafe
879core::core_arch::arm_shared::neon::generatedvst3q_lane_p16function* Neon intrinsic unsafe
880core::core_arch::arm_shared::neon::generatedvst3q_lane_s16function* Neon intrinsic unsafe
881core::core_arch::arm_shared::neon::generatedvst3q_lane_s32function* Neon intrinsic unsafe
882core::core_arch::arm_shared::neon::generatedvst3q_lane_u16function* Neon intrinsic unsafe
883core::core_arch::arm_shared::neon::generatedvst3q_lane_u32function* Neon intrinsic unsafe
884core::core_arch::arm_shared::neon::generatedvst3q_p16function* Neon intrinsic unsafe
885core::core_arch::arm_shared::neon::generatedvst3q_p8function* Neon intrinsic unsafe
886core::core_arch::arm_shared::neon::generatedvst3q_s16function* Neon intrinsic unsafe
887core::core_arch::arm_shared::neon::generatedvst3q_s32function* Neon intrinsic unsafe
888core::core_arch::arm_shared::neon::generatedvst3q_s8function* Neon intrinsic unsafe
889core::core_arch::arm_shared::neon::generatedvst3q_u16function* Neon intrinsic unsafe
890core::core_arch::arm_shared::neon::generatedvst3q_u32function* Neon intrinsic unsafe
891core::core_arch::arm_shared::neon::generatedvst3q_u8function* Neon intrinsic unsafe
892core::core_arch::arm_shared::neon::generatedvst4_f16function* Neon intrinsic unsafe
893core::core_arch::arm_shared::neon::generatedvst4_f32function* Neon intrinsic unsafe
894core::core_arch::arm_shared::neon::generatedvst4_lane_f16function* Neon intrinsic unsafe
895core::core_arch::arm_shared::neon::generatedvst4_lane_f32function* Neon intrinsic unsafe
896core::core_arch::arm_shared::neon::generatedvst4_lane_p16function* Neon intrinsic unsafe
897core::core_arch::arm_shared::neon::generatedvst4_lane_p8function* Neon intrinsic unsafe
898core::core_arch::arm_shared::neon::generatedvst4_lane_s16function* Neon intrinsic unsafe
899core::core_arch::arm_shared::neon::generatedvst4_lane_s32function* Neon intrinsic unsafe
900core::core_arch::arm_shared::neon::generatedvst4_lane_s8function* Neon intrinsic unsafe
901core::core_arch::arm_shared::neon::generatedvst4_lane_u16function* Neon intrinsic unsafe
902core::core_arch::arm_shared::neon::generatedvst4_lane_u32function* Neon intrinsic unsafe
903core::core_arch::arm_shared::neon::generatedvst4_lane_u8function* Neon intrinsic unsafe
904core::core_arch::arm_shared::neon::generatedvst4_p16function* Neon intrinsic unsafe
905core::core_arch::arm_shared::neon::generatedvst4_p64function* Neon intrinsic unsafe
906core::core_arch::arm_shared::neon::generatedvst4_p8function* Neon intrinsic unsafe
907core::core_arch::arm_shared::neon::generatedvst4_s16function* Neon intrinsic unsafe
908core::core_arch::arm_shared::neon::generatedvst4_s32function* Neon intrinsic unsafe
909core::core_arch::arm_shared::neon::generatedvst4_s64function* Neon intrinsic unsafe
910core::core_arch::arm_shared::neon::generatedvst4_s8function* Neon intrinsic unsafe
911core::core_arch::arm_shared::neon::generatedvst4_u16function* Neon intrinsic unsafe
912core::core_arch::arm_shared::neon::generatedvst4_u32function* Neon intrinsic unsafe
913core::core_arch::arm_shared::neon::generatedvst4_u64function* Neon intrinsic unsafe
914core::core_arch::arm_shared::neon::generatedvst4_u8function* Neon intrinsic unsafe
915core::core_arch::arm_shared::neon::generatedvst4q_f16function* Neon intrinsic unsafe
916core::core_arch::arm_shared::neon::generatedvst4q_f32function* Neon intrinsic unsafe
917core::core_arch::arm_shared::neon::generatedvst4q_lane_f16function* Neon intrinsic unsafe
918core::core_arch::arm_shared::neon::generatedvst4q_lane_f32function* Neon intrinsic unsafe
919core::core_arch::arm_shared::neon::generatedvst4q_lane_p16function* Neon intrinsic unsafe
920core::core_arch::arm_shared::neon::generatedvst4q_lane_s16function* Neon intrinsic unsafe
921core::core_arch::arm_shared::neon::generatedvst4q_lane_s32function* Neon intrinsic unsafe
922core::core_arch::arm_shared::neon::generatedvst4q_lane_u16function* Neon intrinsic unsafe
923core::core_arch::arm_shared::neon::generatedvst4q_lane_u32function* Neon intrinsic unsafe
924core::core_arch::arm_shared::neon::generatedvst4q_p16function* Neon intrinsic unsafe
925core::core_arch::arm_shared::neon::generatedvst4q_p8function* Neon intrinsic unsafe
926core::core_arch::arm_shared::neon::generatedvst4q_s16function* Neon intrinsic unsafe
927core::core_arch::arm_shared::neon::generatedvst4q_s32function* Neon intrinsic unsafe
928core::core_arch::arm_shared::neon::generatedvst4q_s8function* Neon intrinsic unsafe
929core::core_arch::arm_shared::neon::generatedvst4q_u16function* Neon intrinsic unsafe
930core::core_arch::arm_shared::neon::generatedvst4q_u32function* Neon intrinsic unsafe
931core::core_arch::arm_shared::neon::generatedvst4q_u8function* Neon intrinsic unsafe
932core::core_arch::arm_shared::neon::generatedvstrq_p128function* Neon intrinsic unsafe
933 | core::core_arch::hexagon::v128 | q6_q_and_qq | function |
934 | core::core_arch::hexagon::v128 | q6_q_and_qqn | function |
935 | core::core_arch::hexagon::v128 | q6_q_not_q | function |
936 | core::core_arch::hexagon::v128 | q6_q_or_qq | function |
937 | core::core_arch::hexagon::v128 | q6_q_or_qqn | function |
938 | core::core_arch::hexagon::v128 | q6_q_vand_vr | function |
939 | core::core_arch::hexagon::v128 | q6_q_vandor_qvr | function |
940 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vbvb | function |
941 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vhvh | function |
942 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vwvw | function |
943 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvbvb | function |
944 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvhvh | function |
945 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvwvw | function |
946 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvbvb | function |
947 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvhvh | function |
948 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvwvw | function |
949 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvbvb | function |
950 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvhvh | function |
951 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvwvw | function |
952 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vbvb | function |
953 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vhfvhf | function |
954 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vhvh | function |
955 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vsfvsf | function |
956 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vubvub | function |
957 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vuhvuh | function |
958 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vuwvuw | function |
959 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vwvw | function |
960 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvbvb | function |
961 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvhfvhf | function |
962 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvhvh | function |
963 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvsfvsf | function |
964 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvubvub | function |
965 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvuhvuh | function |
966 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvuwvuw | function |
967 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvwvw | function |
968 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvbvb | function |
969 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvhfvhf | function |
970 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvhvh | function |
971 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvsfvsf | function |
972 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvubvub | function |
973 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvuhvuh | function |
974 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvuwvuw | function |
975 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvwvw | function |
976 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvbvb | function |
977 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvhfvhf | function |
978 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvhvh | function |
979 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvsfvsf | function |
980 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvubvub | function |
981 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvuhvuh | function |
982 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvuwvuw | function |
983 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvwvw | function |
984 | core::core_arch::hexagon::v128 | q6_q_vsetq2_r | function |
985 | core::core_arch::hexagon::v128 | q6_q_vsetq_r | function |
986 | core::core_arch::hexagon::v128 | q6_q_xor_qq | function |
987 | core::core_arch::hexagon::v128 | q6_qb_vshuffe_qhqh | function |
988 | core::core_arch::hexagon::v128 | q6_qh_vshuffe_qwqw | function |
989 | core::core_arch::hexagon::v128 | q6_r_vextract_vr | function |
990 | core::core_arch::hexagon::v128 | q6_v_equals_v | function |
991 | core::core_arch::hexagon::v128 | q6_v_hi_w | function |
992 | core::core_arch::hexagon::v128 | q6_v_lo_w | function |
993 | core::core_arch::hexagon::v128 | q6_v_vabs_v | function |
994 | core::core_arch::hexagon::v128 | q6_v_valign_vvi | function |
995 | core::core_arch::hexagon::v128 | q6_v_valign_vvr | function |
996 | core::core_arch::hexagon::v128 | q6_v_vand_qnr | function |
997 | core::core_arch::hexagon::v128 | q6_v_vand_qnv | function |
998 | core::core_arch::hexagon::v128 | q6_v_vand_qr | function |
999 | core::core_arch::hexagon::v128 | q6_v_vand_qv | function |
1000 | core::core_arch::hexagon::v128 | q6_v_vand_vv | function |
1001 | core::core_arch::hexagon::v128 | q6_v_vandor_vqnr | function |
1002 | core::core_arch::hexagon::v128 | q6_v_vandor_vqr | function |
1003 | core::core_arch::hexagon::v128 | q6_v_vdelta_vv | function |
1004 | core::core_arch::hexagon::v128 | q6_v_vfmax_vv | function |
1005 | core::core_arch::hexagon::v128 | q6_v_vfmin_vv | function |
1006 | core::core_arch::hexagon::v128 | q6_v_vfneg_v | function |
1007 | core::core_arch::hexagon::v128 | q6_v_vgetqfext_vr | function |
1008 | core::core_arch::hexagon::v128 | q6_v_vlalign_vvi | function |
1009 | core::core_arch::hexagon::v128 | q6_v_vlalign_vvr | function |
1010 | core::core_arch::hexagon::v128 | q6_v_vmux_qvv | function |
1011 | core::core_arch::hexagon::v128 | q6_v_vnot_v | function |
1012 | core::core_arch::hexagon::v128 | q6_v_vor_vv | function |
1013 | core::core_arch::hexagon::v128 | q6_v_vrdelta_vv | function |
1014 | core::core_arch::hexagon::v128 | q6_v_vror_vr | function |
1015 | core::core_arch::hexagon::v128 | q6_v_vsetqfext_vr | function |
1016 | core::core_arch::hexagon::v128 | q6_v_vsplat_r | function |
1017 | core::core_arch::hexagon::v128 | q6_v_vxor_vv | function |
1018 | core::core_arch::hexagon::v128 | q6_v_vzero | function |
1019 | core::core_arch::hexagon::v128 | q6_vb_condacc_qnvbvb | function |
1020 | core::core_arch::hexagon::v128 | q6_vb_condacc_qvbvb | function |
1021 | core::core_arch::hexagon::v128 | q6_vb_condnac_qnvbvb | function |
1022 | core::core_arch::hexagon::v128 | q6_vb_condnac_qvbvb | function |
1023 | core::core_arch::hexagon::v128 | q6_vb_prefixsum_q | function |
1024 | core::core_arch::hexagon::v128 | q6_vb_vabs_vb | function |
1025 | core::core_arch::hexagon::v128 | q6_vb_vabs_vb_sat | function |
1026 | core::core_arch::hexagon::v128 | q6_vb_vadd_vbvb | function |
1027 | core::core_arch::hexagon::v128 | q6_vb_vadd_vbvb_sat | function |
1028 | core::core_arch::hexagon::v128 | q6_vb_vasr_vhvhr_rnd_sat | function |
1029 | core::core_arch::hexagon::v128 | q6_vb_vasr_vhvhr_sat | function |
1030 | core::core_arch::hexagon::v128 | q6_vb_vavg_vbvb | function |
1031 | core::core_arch::hexagon::v128 | q6_vb_vavg_vbvb_rnd | function |
1032 | core::core_arch::hexagon::v128 | q6_vb_vcvt_vhfvhf | function |
1033 | core::core_arch::hexagon::v128 | q6_vb_vdeal_vb | function |
1034 | core::core_arch::hexagon::v128 | q6_vb_vdeale_vbvb | function |
1035 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbi | function |
1036 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbr | function |
1037 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbr_nomatch | function |
1038 | core::core_arch::hexagon::v128 | q6_vb_vlut32or_vbvbvbi | function |
1039 | core::core_arch::hexagon::v128 | q6_vb_vlut32or_vbvbvbr | function |
1040 | core::core_arch::hexagon::v128 | q6_vb_vmax_vbvb | function |
1041 | core::core_arch::hexagon::v128 | q6_vb_vmin_vbvb | function |
1042 | core::core_arch::hexagon::v128 | q6_vb_vnavg_vbvb | function |
1043 | core::core_arch::hexagon::v128 | q6_vb_vnavg_vubvub | function |
1044 | core::core_arch::hexagon::v128 | q6_vb_vpack_vhvh_sat | function |
1045 | core::core_arch::hexagon::v128 | q6_vb_vpacke_vhvh | function |
1046 | core::core_arch::hexagon::v128 | q6_vb_vpacko_vhvh | function |
1047 | core::core_arch::hexagon::v128 | q6_vb_vround_vhvh_sat | function |
1048 | core::core_arch::hexagon::v128 | q6_vb_vshuff_vb | function |
1049 | core::core_arch::hexagon::v128 | q6_vb_vshuffe_vbvb | function |
1050 | core::core_arch::hexagon::v128 | q6_vb_vshuffo_vbvb | function |
1051 | core::core_arch::hexagon::v128 | q6_vb_vsplat_r | function |
1052 | core::core_arch::hexagon::v128 | q6_vb_vsub_vbvb | function |
1053 | core::core_arch::hexagon::v128 | q6_vb_vsub_vbvb_sat | function |
1054 | core::core_arch::hexagon::v128 | q6_vgather_aqrmvh | function |
1055 | core::core_arch::hexagon::v128 | q6_vgather_aqrmvw | function |
1056 | core::core_arch::hexagon::v128 | q6_vgather_aqrmww | function |
1057 | core::core_arch::hexagon::v128 | q6_vgather_armvh | function |
1058 | core::core_arch::hexagon::v128 | q6_vgather_armvw | function |
1059 | core::core_arch::hexagon::v128 | q6_vgather_armww | function |
1060 | core::core_arch::hexagon::v128 | q6_vh_condacc_qnvhvh | function |
1061 | core::core_arch::hexagon::v128 | q6_vh_condacc_qvhvh | function |
1062 | core::core_arch::hexagon::v128 | q6_vh_condnac_qnvhvh | function |
1063 | core::core_arch::hexagon::v128 | q6_vh_condnac_qvhvh | function |
1064 | core::core_arch::hexagon::v128 | q6_vh_equals_vhf | function |
1065 | core::core_arch::hexagon::v128 | q6_vh_prefixsum_q | function |
1066 | core::core_arch::hexagon::v128 | q6_vh_vabs_vh | function |
1067 | core::core_arch::hexagon::v128 | q6_vh_vabs_vh_sat | function |
1068 | core::core_arch::hexagon::v128 | q6_vh_vadd_vclb_vhvh | function |
1069 | core::core_arch::hexagon::v128 | q6_vh_vadd_vhvh | function |
1070 | core::core_arch::hexagon::v128 | q6_vh_vadd_vhvh_sat | function |
1071 | core::core_arch::hexagon::v128 | q6_vh_vasl_vhr | function |
1072 | core::core_arch::hexagon::v128 | q6_vh_vasl_vhvh | function |
1073 | core::core_arch::hexagon::v128 | q6_vh_vaslacc_vhvhr | function |
1074 | core::core_arch::hexagon::v128 | q6_vh_vasr_vhr | function |
1075 | core::core_arch::hexagon::v128 | q6_vh_vasr_vhvh | function |
1076 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr | function |
1077 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr_rnd_sat | function |
1078 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr_sat | function |
1079 | core::core_arch::hexagon::v128 | q6_vh_vasracc_vhvhr | function |
1080 | core::core_arch::hexagon::v128 | q6_vh_vavg_vhvh | function |
1081 | core::core_arch::hexagon::v128 | q6_vh_vavg_vhvh_rnd | function |
1082 | core::core_arch::hexagon::v128 | q6_vh_vcvt_vhf | function |
1083 | core::core_arch::hexagon::v128 | q6_vh_vdeal_vh | function |
1084 | core::core_arch::hexagon::v128 | q6_vh_vdmpy_vubrb | function |
1085 | core::core_arch::hexagon::v128 | q6_vh_vdmpyacc_vhvubrb | function |
1086 | core::core_arch::hexagon::v128 | q6_vh_vlsr_vhvh | function |
1087 | core::core_arch::hexagon::v128 | q6_vh_vmax_vhvh | function |
1088 | core::core_arch::hexagon::v128 | q6_vh_vmin_vhvh | function |
1089 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhrh_s1_rnd_sat | function |
1090 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhrh_s1_sat | function |
1091 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhvh_s1_rnd_sat | function |
1092 | core::core_arch::hexagon::v128 | q6_vh_vmpyi_vhrb | function |
1093 | core::core_arch::hexagon::v128 | q6_vh_vmpyi_vhvh | function |
1094 | core::core_arch::hexagon::v128 | q6_vh_vmpyiacc_vhvhrb | function |
1095 | core::core_arch::hexagon::v128 | q6_vh_vmpyiacc_vhvhvh | function |
1096 | core::core_arch::hexagon::v128 | q6_vh_vnavg_vhvh | function |
1097 | core::core_arch::hexagon::v128 | q6_vh_vnormamt_vh | function |
1098 | core::core_arch::hexagon::v128 | q6_vh_vpack_vwvw_sat | function |
1099 | core::core_arch::hexagon::v128 | q6_vh_vpacke_vwvw | function |
1100 | core::core_arch::hexagon::v128 | q6_vh_vpacko_vwvw | function |
1101 | core::core_arch::hexagon::v128 | q6_vh_vpopcount_vh | function |
1102 | core::core_arch::hexagon::v128 | q6_vh_vround_vwvw_sat | function |
1103 | core::core_arch::hexagon::v128 | q6_vh_vsat_vwvw | function |
1104 | core::core_arch::hexagon::v128 | q6_vh_vshuff_vh | function |
1105 | core::core_arch::hexagon::v128 | q6_vh_vshuffe_vhvh | function |
1106 | core::core_arch::hexagon::v128 | q6_vh_vshuffo_vhvh | function |
1107 | core::core_arch::hexagon::v128 | q6_vh_vsplat_r | function |
1108 | core::core_arch::hexagon::v128 | q6_vh_vsub_vhvh | function |
1109 | core::core_arch::hexagon::v128 | q6_vh_vsub_vhvh_sat | function |
1110 | core::core_arch::hexagon::v128 | q6_vhf_equals_vh | function |
1111 | core::core_arch::hexagon::v128 | q6_vhf_equals_vqf16 | function |
1112 | core::core_arch::hexagon::v128 | q6_vhf_equals_wqf32 | function |
1113 | core::core_arch::hexagon::v128 | q6_vhf_vabs_vhf | function |
1114 | core::core_arch::hexagon::v128 | q6_vhf_vadd_vhfvhf | function |
1115 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vh | function |
1116 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vsfvsf | function |
1117 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vuh | function |
1118 | core::core_arch::hexagon::v128 | q6_vhf_vfmax_vhfvhf | function |
1119 | core::core_arch::hexagon::v128 | q6_vhf_vfmin_vhfvhf | function |
1120 | core::core_arch::hexagon::v128 | q6_vhf_vfneg_vhf | function |
1121 | core::core_arch::hexagon::v128 | q6_vhf_vmax_vhfvhf | function |
1122 | core::core_arch::hexagon::v128 | q6_vhf_vmin_vhfvhf | function |
1123 | core::core_arch::hexagon::v128 | q6_vhf_vmpy_vhfvhf | function |
1124 | core::core_arch::hexagon::v128 | q6_vhf_vmpyacc_vhfvhfvhf | function |
1125 | core::core_arch::hexagon::v128 | q6_vhf_vsub_vhfvhf | function |
1126 | core::core_arch::hexagon::v128 | q6_vmem_qnriv | function |
1127 | core::core_arch::hexagon::v128 | q6_vmem_qnriv_nt | function |
1128 | core::core_arch::hexagon::v128 | q6_vmem_qriv | function |
1129 | core::core_arch::hexagon::v128 | q6_vmem_qriv_nt | function |
1130 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vhfvhf | function |
1131 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vqf16vhf | function |
1132 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vqf16vqf16 | function |
1133 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vhfvhf | function |
1134 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vqf16vhf | function |
1135 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vqf16vqf16 | function |
1136 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vhfvhf | function |
1137 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vqf16vhf | function |
1138 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vqf16vqf16 | function |
1139 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vqf32vqf32 | function |
1140 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vqf32vsf | function |
1141 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vsfvsf | function |
1142 | core::core_arch::hexagon::v128 | q6_vqf32_vmpy_vqf32vqf32 | function |
1143 | core::core_arch::hexagon::v128 | q6_vqf32_vmpy_vsfvsf | function |
1144 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vqf32vqf32 | function |
1145 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vqf32vsf | function |
1146 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vsfvsf | function |
1147 | core::core_arch::hexagon::v128 | q6_vscatter_qrmvhv | function |
1148 | core::core_arch::hexagon::v128 | q6_vscatter_qrmvwv | function |
1149 | core::core_arch::hexagon::v128 | q6_vscatter_qrmwwv | function |
1150 | core::core_arch::hexagon::v128 | q6_vscatter_rmvhv | function |
1151 | core::core_arch::hexagon::v128 | q6_vscatter_rmvwv | function |
1152 | core::core_arch::hexagon::v128 | q6_vscatter_rmwwv | function |
1153 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmvhv | function |
1154 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmvwv | function |
1155 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmwwv | function |
1156 | core::core_arch::hexagon::v128 | q6_vsf_equals_vqf32 | function |
1157 | core::core_arch::hexagon::v128 | q6_vsf_equals_vw | function |
1158 | core::core_arch::hexagon::v128 | q6_vsf_vabs_vsf | function |
1159 | core::core_arch::hexagon::v128 | q6_vsf_vadd_vsfvsf | function |
1160 | core::core_arch::hexagon::v128 | q6_vsf_vdmpy_vhfvhf | function |
1161 | core::core_arch::hexagon::v128 | q6_vsf_vdmpyacc_vsfvhfvhf | function |
1162 | core::core_arch::hexagon::v128 | q6_vsf_vfmax_vsfvsf | function |
1163 | core::core_arch::hexagon::v128 | q6_vsf_vfmin_vsfvsf | function |
1164 | core::core_arch::hexagon::v128 | q6_vsf_vfneg_vsf | function |
1165 | core::core_arch::hexagon::v128 | q6_vsf_vmax_vsfvsf | function |
1166 | core::core_arch::hexagon::v128 | q6_vsf_vmin_vsfvsf | function |
1167 | core::core_arch::hexagon::v128 | q6_vsf_vmpy_vsfvsf | function |
1168 | core::core_arch::hexagon::v128 | q6_vsf_vsub_vsfvsf | function |
1169 | core::core_arch::hexagon::v128 | q6_vub_vabsdiff_vubvub | function |
1170 | core::core_arch::hexagon::v128 | q6_vub_vadd_vubvb_sat | function |
1171 | core::core_arch::hexagon::v128 | q6_vub_vadd_vubvub_sat | function |
1172 | core::core_arch::hexagon::v128 | q6_vub_vasr_vhvhr_rnd_sat | function |
1173 | core::core_arch::hexagon::v128 | q6_vub_vasr_vhvhr_sat | function |
1174 | core::core_arch::hexagon::v128 | q6_vub_vasr_vuhvuhr_rnd_sat | function |
1175 | core::core_arch::hexagon::v128 | q6_vub_vasr_vuhvuhr_sat | function |
1176 | core::core_arch::hexagon::v128 | q6_vub_vasr_wuhvub_rnd_sat | function |
1177 | core::core_arch::hexagon::v128 | q6_vub_vasr_wuhvub_sat | function |
1178 | core::core_arch::hexagon::v128 | q6_vub_vavg_vubvub | function |
1179 | core::core_arch::hexagon::v128 | q6_vub_vavg_vubvub_rnd | function |
1180 | core::core_arch::hexagon::v128 | q6_vub_vcvt_vhfvhf | function |
1181 | core::core_arch::hexagon::v128 | q6_vub_vlsr_vubr | function |
1182 | core::core_arch::hexagon::v128 | q6_vub_vmax_vubvub | function |
1183 | core::core_arch::hexagon::v128 | q6_vub_vmin_vubvub | function |
1184 | core::core_arch::hexagon::v128 | q6_vub_vpack_vhvh_sat | function |
1185 | core::core_arch::hexagon::v128 | q6_vub_vround_vhvh_sat | function |
1186 | core::core_arch::hexagon::v128 | q6_vub_vround_vuhvuh_sat | function |
1187 | core::core_arch::hexagon::v128 | q6_vub_vsat_vhvh | function |
1188 | core::core_arch::hexagon::v128 | q6_vub_vsub_vubvb_sat | function |
1189 | core::core_arch::hexagon::v128 | q6_vub_vsub_vubvub_sat | function |
1190 | core::core_arch::hexagon::v128 | q6_vuh_vabsdiff_vhvh | function |
1191 | core::core_arch::hexagon::v128 | q6_vuh_vabsdiff_vuhvuh | function |
1192 | core::core_arch::hexagon::v128 | q6_vuh_vadd_vuhvuh_sat | function |
1193 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vuwvuwr_rnd_sat | function |
1194 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vuwvuwr_sat | function |
1195 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vwvwr_rnd_sat | function |
1196 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vwvwr_sat | function |
1197 | core::core_arch::hexagon::v128 | q6_vuh_vasr_wwvuh_rnd_sat | function |
1198 | core::core_arch::hexagon::v128 | q6_vuh_vasr_wwvuh_sat | function |
1199 | core::core_arch::hexagon::v128 | q6_vuh_vavg_vuhvuh | function |
1200 | core::core_arch::hexagon::v128 | q6_vuh_vavg_vuhvuh_rnd | function |
1201 | core::core_arch::hexagon::v128 | q6_vuh_vcl0_vuh | function |
1202 | core::core_arch::hexagon::v128 | q6_vuh_vcvt_vhf | function |
1203 | core::core_arch::hexagon::v128 | q6_vuh_vlsr_vuhr | function |
1204 | core::core_arch::hexagon::v128 | q6_vuh_vmax_vuhvuh | function |
1205 | core::core_arch::hexagon::v128 | q6_vuh_vmin_vuhvuh | function |
1206 | core::core_arch::hexagon::v128 | q6_vuh_vmpy_vuhvuh_rs16 | function |
1207 | core::core_arch::hexagon::v128 | q6_vuh_vpack_vwvw_sat | function |
1208 | core::core_arch::hexagon::v128 | q6_vuh_vround_vuwvuw_sat | function |
1209 | core::core_arch::hexagon::v128 | q6_vuh_vround_vwvw_sat | function |
1210 | core::core_arch::hexagon::v128 | q6_vuh_vsat_vuwvuw | function |
1211 | core::core_arch::hexagon::v128 | q6_vuh_vsub_vuhvuh_sat | function |
1212 | core::core_arch::hexagon::v128 | q6_vuw_vabsdiff_vwvw | function |
1213 | core::core_arch::hexagon::v128 | q6_vuw_vadd_vuwvuw_sat | function |
1214 | core::core_arch::hexagon::v128 | q6_vuw_vavg_vuwvuw | function |
1215 | core::core_arch::hexagon::v128 | q6_vuw_vavg_vuwvuw_rnd | function |
1216 | core::core_arch::hexagon::v128 | q6_vuw_vcl0_vuw | function |
1217 | core::core_arch::hexagon::v128 | q6_vuw_vlsr_vuwr | function |
1218 | core::core_arch::hexagon::v128 | q6_vuw_vmpye_vuhruh | function |
1219 | core::core_arch::hexagon::v128 | q6_vuw_vmpyeacc_vuwvuhruh | function |
1220 | core::core_arch::hexagon::v128 | q6_vuw_vrmpy_vubrub | function |
1221 | core::core_arch::hexagon::v128 | q6_vuw_vrmpy_vubvub | function |
1222 | core::core_arch::hexagon::v128 | q6_vuw_vrmpyacc_vuwvubrub | function |
1223 | core::core_arch::hexagon::v128 | q6_vuw_vrmpyacc_vuwvubvub | function |
1224 | core::core_arch::hexagon::v128 | q6_vuw_vrotr_vuwvuw | function |
1225 | core::core_arch::hexagon::v128 | q6_vuw_vsub_vuwvuw_sat | function |
1226 | core::core_arch::hexagon::v128 | q6_vw_condacc_qnvwvw | function |
1227 | core::core_arch::hexagon::v128 | q6_vw_condacc_qvwvw | function |
1228 | core::core_arch::hexagon::v128 | q6_vw_condnac_qnvwvw | function |
1229 | core::core_arch::hexagon::v128 | q6_vw_condnac_qvwvw | function |
1230 | core::core_arch::hexagon::v128 | q6_vw_equals_vsf | function |
1231 | core::core_arch::hexagon::v128 | q6_vw_prefixsum_q | function |
1232 | core::core_arch::hexagon::v128 | q6_vw_vabs_vw | function |
1233 | core::core_arch::hexagon::v128 | q6_vw_vabs_vw_sat | function |
1234 | core::core_arch::hexagon::v128 | q6_vw_vadd_vclb_vwvw | function |
1235 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvw | function |
1236 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvw_sat | function |
1237 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvwq_carry_sat | function |
1238 | core::core_arch::hexagon::v128 | q6_vw_vasl_vwr | function |
1239 | core::core_arch::hexagon::v128 | q6_vw_vasl_vwvw | function |
1240 | core::core_arch::hexagon::v128 | q6_vw_vaslacc_vwvwr | function |
1241 | core::core_arch::hexagon::v128 | q6_vw_vasr_vwr | function |
1242 | core::core_arch::hexagon::v128 | q6_vw_vasr_vwvw | function |
1243 | core::core_arch::hexagon::v128 | q6_vw_vasracc_vwvwr | function |
1244 | core::core_arch::hexagon::v128 | q6_vw_vavg_vwvw | function |
1245 | core::core_arch::hexagon::v128 | q6_vw_vavg_vwvw_rnd | function |
1246 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhrb | function |
1247 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhrh_sat | function |
1248 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhruh_sat | function |
1249 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhvh_sat | function |
1250 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_whrh_sat | function |
1251 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_whruh_sat | function |
1252 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhrb | function |
1253 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhrh_sat | function |
1254 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhruh_sat | function |
1255 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhvh_sat | function |
1256 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwwhrh_sat | function |
1257 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwwhruh_sat | function |
1258 | core::core_arch::hexagon::v128 | q6_vw_vfmv_vw | function |
1259 | core::core_arch::hexagon::v128 | q6_vw_vinsert_vwr | function |
1260 | core::core_arch::hexagon::v128 | q6_vw_vlsr_vwvw | function |
1261 | core::core_arch::hexagon::v128 | q6_vw_vmax_vwvw | function |
1262 | core::core_arch::hexagon::v128 | q6_vw_vmin_vwvw | function |
1263 | core::core_arch::hexagon::v128 | q6_vw_vmpye_vwvuh | function |
1264 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrb | function |
1265 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrh | function |
1266 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrub | function |
1267 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrb | function |
1268 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrh | function |
1269 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrub | function |
1270 | core::core_arch::hexagon::v128 | q6_vw_vmpyie_vwvuh | function |
1271 | core::core_arch::hexagon::v128 | q6_vw_vmpyieacc_vwvwvh | function |
1272 | core::core_arch::hexagon::v128 | q6_vw_vmpyieacc_vwvwvuh | function |
1273 | core::core_arch::hexagon::v128 | q6_vw_vmpyieo_vhvh | function |
1274 | core::core_arch::hexagon::v128 | q6_vw_vmpyio_vwvh | function |
1275 | core::core_arch::hexagon::v128 | q6_vw_vmpyo_vwvh_s1_rnd_sat | function |
1276 | core::core_arch::hexagon::v128 | q6_vw_vmpyo_vwvh_s1_sat | function |
1277 | core::core_arch::hexagon::v128 | q6_vw_vmpyoacc_vwvwvh_s1_rnd_sat_shift | function |
1278 | core::core_arch::hexagon::v128 | q6_vw_vmpyoacc_vwvwvh_s1_sat_shift | function |
1279 | core::core_arch::hexagon::v128 | q6_vw_vnavg_vwvw | function |
1280 | core::core_arch::hexagon::v128 | q6_vw_vnormamt_vw | function |
1281 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vbvb | function |
1282 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vubrb | function |
1283 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vubvb | function |
1284 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvbvb | function |
1285 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvubrb | function |
1286 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvubvb | function |
1287 | core::core_arch::hexagon::v128 | q6_vw_vsatdw_vwvw | function |
1288 | core::core_arch::hexagon::v128 | q6_vw_vsub_vwvw | function |
1289 | core::core_arch::hexagon::v128 | q6_vw_vsub_vwvw_sat | function |
1290 | core::core_arch::hexagon::v128 | q6_w_equals_w | function |
1291 | core::core_arch::hexagon::v128 | q6_w_vcombine_vv | function |
1292 | core::core_arch::hexagon::v128 | q6_w_vdeal_vvr | function |
1293 | core::core_arch::hexagon::v128 | q6_w_vmpye_vwvuh | function |
1294 | core::core_arch::hexagon::v128 | q6_w_vmpyoacc_wvwvh | function |
1295 | core::core_arch::hexagon::v128 | q6_w_vshuff_vvr | function |
1296 | core::core_arch::hexagon::v128 | q6_w_vswap_qvv | function |
1297 | core::core_arch::hexagon::v128 | q6_w_vzero | function |
1298 | core::core_arch::hexagon::v128 | q6_wb_vadd_wbwb | function |
1299 | core::core_arch::hexagon::v128 | q6_wb_vadd_wbwb_sat | function |
1300 | core::core_arch::hexagon::v128 | q6_wb_vshuffoe_vbvb | function |
1301 | core::core_arch::hexagon::v128 | q6_wb_vsub_wbwb | function |
1302 | core::core_arch::hexagon::v128 | q6_wb_vsub_wbwb_sat | function |
1303 | core::core_arch::hexagon::v128 | q6_wh_vadd_vubvub | function |
1304 | core::core_arch::hexagon::v128 | q6_wh_vadd_whwh | function |
1305 | core::core_arch::hexagon::v128 | q6_wh_vadd_whwh_sat | function |
1306 | core::core_arch::hexagon::v128 | q6_wh_vaddacc_whvubvub | function |
1307 | core::core_arch::hexagon::v128 | q6_wh_vdmpy_wubrb | function |
1308 | core::core_arch::hexagon::v128 | q6_wh_vdmpyacc_whwubrb | function |
1309 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhi | function |
1310 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhr | function |
1311 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhr_nomatch | function |
1312 | core::core_arch::hexagon::v128 | q6_wh_vlut16or_whvbvhi | function |
1313 | core::core_arch::hexagon::v128 | q6_wh_vlut16or_whvbvhr | function |
1314 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubrb | function |
1315 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubrub | function |
1316 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubwb | function |
1317 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubwub | function |
1318 | core::core_arch::hexagon::v128 | q6_wh_vmpaacc_whwubrb | function |
1319 | core::core_arch::hexagon::v128 | q6_wh_vmpaacc_whwubrub | function |
1320 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vbvb | function |
1321 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vubrb | function |
1322 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vubvb | function |
1323 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvbvb | function |
1324 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvubrb | function |
1325 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvubvb | function |
1326 | core::core_arch::hexagon::v128 | q6_wh_vshuffoe_vhvh | function |
1327core::core_arch::hexagon::v128q6_wh_vsub_vubvubfunction
1328core::core_arch::hexagon::v128q6_wh_vsub_whwhfunction
1329core::core_arch::hexagon::v128q6_wh_vsub_whwh_satfunction
1330core::core_arch::hexagon::v128q6_wh_vsxt_vbfunction
1331core::core_arch::hexagon::v128q6_wh_vtmpy_wbrbfunction
1332core::core_arch::hexagon::v128q6_wh_vtmpy_wubrbfunction
1333core::core_arch::hexagon::v128q6_wh_vtmpyacc_whwbrbfunction
1334core::core_arch::hexagon::v128q6_wh_vtmpyacc_whwubrbfunction
1335core::core_arch::hexagon::v128q6_wh_vunpack_vbfunction
1336core::core_arch::hexagon::v128q6_wh_vunpackoor_whvbfunction
1337core::core_arch::hexagon::v128q6_whf_vcvt2_vbfunction
1338core::core_arch::hexagon::v128q6_whf_vcvt2_vubfunction
1339core::core_arch::hexagon::v128q6_whf_vcvt_vfunction
1340core::core_arch::hexagon::v128q6_whf_vcvt_vbfunction
1341core::core_arch::hexagon::v128q6_whf_vcvt_vubfunction
1342core::core_arch::hexagon::v128q6_wqf32_vmpy_vhfvhffunction
1343core::core_arch::hexagon::v128q6_wqf32_vmpy_vqf16vhffunction
1344core::core_arch::hexagon::v128q6_wqf32_vmpy_vqf16vqf16function
1345core::core_arch::hexagon::v128q6_wsf_vadd_vhfvhffunction
1346core::core_arch::hexagon::v128q6_wsf_vcvt_vhffunction
1347core::core_arch::hexagon::v128q6_wsf_vmpy_vhfvhffunction
1348core::core_arch::hexagon::v128q6_wsf_vmpyacc_wsfvhfvhffunction
1349core::core_arch::hexagon::v128q6_wsf_vsub_vhfvhffunction
1350core::core_arch::hexagon::v128q6_wub_vadd_wubwub_satfunction
1351core::core_arch::hexagon::v128q6_wub_vsub_wubwub_satfunction
1352core::core_arch::hexagon::v128q6_wuh_vadd_wuhwuh_satfunction
1353core::core_arch::hexagon::v128q6_wuh_vmpy_vubrubfunction
1354core::core_arch::hexagon::v128q6_wuh_vmpy_vubvubfunction
1355core::core_arch::hexagon::v128q6_wuh_vmpyacc_wuhvubrubfunction
1356core::core_arch::hexagon::v128q6_wuh_vmpyacc_wuhvubvubfunction
1357core::core_arch::hexagon::v128q6_wuh_vsub_wuhwuh_satfunction
1358core::core_arch::hexagon::v128q6_wuh_vunpack_vubfunction
1359core::core_arch::hexagon::v128q6_wuh_vzxt_vubfunction
1360core::core_arch::hexagon::v128q6_wuw_vadd_wuwwuw_satfunction
1361core::core_arch::hexagon::v128q6_wuw_vdsad_wuhruhfunction
1362core::core_arch::hexagon::v128q6_wuw_vdsadacc_wuwwuhruhfunction
1363core::core_arch::hexagon::v128q6_wuw_vmpy_vuhruhfunction
1364core::core_arch::hexagon::v128q6_wuw_vmpy_vuhvuhfunction
1365core::core_arch::hexagon::v128q6_wuw_vmpyacc_wuwvuhruhfunction
1366core::core_arch::hexagon::v128q6_wuw_vmpyacc_wuwvuhvuhfunction
1367core::core_arch::hexagon::v128q6_wuw_vrmpy_wubrubifunction
1368core::core_arch::hexagon::v128q6_wuw_vrmpyacc_wuwwubrubifunction
1369core::core_arch::hexagon::v128q6_wuw_vrsad_wubrubifunction
1370core::core_arch::hexagon::v128q6_wuw_vrsadacc_wuwwubrubifunction
1371core::core_arch::hexagon::v128q6_wuw_vsub_wuwwuw_satfunction
1372core::core_arch::hexagon::v128q6_wuw_vunpack_vuhfunction
1373core::core_arch::hexagon::v128q6_wuw_vzxt_vuhfunction
1374core::core_arch::hexagon::v128q6_ww_v6mpy_wubwbi_hfunction
1375core::core_arch::hexagon::v128q6_ww_v6mpy_wubwbi_vfunction
1376core::core_arch::hexagon::v128q6_ww_v6mpyacc_wwwubwbi_hfunction
1377core::core_arch::hexagon::v128q6_ww_v6mpyacc_wwwubwbi_vfunction
1378core::core_arch::hexagon::v128q6_ww_vadd_vhvhfunction
1379core::core_arch::hexagon::v128q6_ww_vadd_vuhvuhfunction
1380core::core_arch::hexagon::v128q6_ww_vadd_wwwwfunction
1381core::core_arch::hexagon::v128q6_ww_vadd_wwww_satfunction
1382core::core_arch::hexagon::v128q6_ww_vaddacc_wwvhvhfunction
1383core::core_arch::hexagon::v128q6_ww_vaddacc_wwvuhvuhfunction
1384core::core_arch::hexagon::v128q6_ww_vasrinto_wwvwvwfunction
1385core::core_arch::hexagon::v128q6_ww_vdmpy_whrbfunction
1386core::core_arch::hexagon::v128q6_ww_vdmpyacc_wwwhrbfunction
1387core::core_arch::hexagon::v128q6_ww_vmpa_whrbfunction
1388core::core_arch::hexagon::v128q6_ww_vmpa_wuhrbfunction
1389core::core_arch::hexagon::v128q6_ww_vmpaacc_wwwhrbfunction
1390core::core_arch::hexagon::v128q6_ww_vmpaacc_wwwuhrbfunction
1391core::core_arch::hexagon::v128q6_ww_vmpy_vhrhfunction
1392core::core_arch::hexagon::v128q6_ww_vmpy_vhvhfunction
1393core::core_arch::hexagon::v128q6_ww_vmpy_vhvuhfunction
1394core::core_arch::hexagon::v128q6_ww_vmpyacc_wwvhrhfunction
1395core::core_arch::hexagon::v128q6_ww_vmpyacc_wwvhrh_satfunction
1396core::core_arch::hexagon::v128q6_ww_vmpyacc_wwvhvhfunction
1397core::core_arch::hexagon::v128q6_ww_vmpyacc_wwvhvuhfunction
1398core::core_arch::hexagon::v128q6_ww_vrmpy_wubrbifunction
1399core::core_arch::hexagon::v128q6_ww_vrmpyacc_wwwubrbifunction
1400core::core_arch::hexagon::v128q6_ww_vsub_vhvhfunction
1401core::core_arch::hexagon::v128q6_ww_vsub_vuhvuhfunction
1402core::core_arch::hexagon::v128q6_ww_vsub_wwwwfunction
1403core::core_arch::hexagon::v128q6_ww_vsub_wwww_satfunction
1404core::core_arch::hexagon::v128q6_ww_vsxt_vhfunction
1405core::core_arch::hexagon::v128q6_ww_vtmpy_whrbfunction
1406core::core_arch::hexagon::v128q6_ww_vtmpyacc_wwwhrbfunction
1407core::core_arch::hexagon::v128q6_ww_vunpack_vhfunction
1408core::core_arch::hexagon::v128q6_ww_vunpackoor_wwvhfunction
1409core::core_arch::hexagon::v64q6_q_and_qqfunction
1410core::core_arch::hexagon::v64q6_q_and_qqnfunction
1411core::core_arch::hexagon::v64q6_q_not_qfunction
1412core::core_arch::hexagon::v64q6_q_or_qqfunction
1413core::core_arch::hexagon::v64q6_q_or_qqnfunction
1414core::core_arch::hexagon::v64q6_q_vand_vrfunction
1415core::core_arch::hexagon::v64q6_q_vandor_qvrfunction
1416core::core_arch::hexagon::v64q6_q_vcmp_eq_vbvbfunction
1417core::core_arch::hexagon::v64q6_q_vcmp_eq_vhvhfunction
1418core::core_arch::hexagon::v64q6_q_vcmp_eq_vwvwfunction
1419core::core_arch::hexagon::v64q6_q_vcmp_eqand_qvbvbfunction
1420core::core_arch::hexagon::v64q6_q_vcmp_eqand_qvhvhfunction
1421core::core_arch::hexagon::v64q6_q_vcmp_eqand_qvwvwfunction
1422core::core_arch::hexagon::v64q6_q_vcmp_eqor_qvbvbfunction
1423core::core_arch::hexagon::v64q6_q_vcmp_eqor_qvhvhfunction
1424core::core_arch::hexagon::v64q6_q_vcmp_eqor_qvwvwfunction
1425core::core_arch::hexagon::v64q6_q_vcmp_eqxacc_qvbvbfunction
1426core::core_arch::hexagon::v64q6_q_vcmp_eqxacc_qvhvhfunction
1427core::core_arch::hexagon::v64q6_q_vcmp_eqxacc_qvwvwfunction
1428core::core_arch::hexagon::v64q6_q_vcmp_gt_vbvbfunction
1429core::core_arch::hexagon::v64q6_q_vcmp_gt_vhfvhffunction
1430core::core_arch::hexagon::v64q6_q_vcmp_gt_vhvhfunction
1431core::core_arch::hexagon::v64q6_q_vcmp_gt_vsfvsffunction
1432core::core_arch::hexagon::v64q6_q_vcmp_gt_vubvubfunction
1433core::core_arch::hexagon::v64q6_q_vcmp_gt_vuhvuhfunction
1434core::core_arch::hexagon::v64q6_q_vcmp_gt_vuwvuwfunction
1435core::core_arch::hexagon::v64q6_q_vcmp_gt_vwvwfunction
1436core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvbvbfunction
1437core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvhfvhffunction
1438core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvhvhfunction
1439core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvsfvsffunction
1440core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvubvubfunction
1441core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvuhvuhfunction
1442core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvuwvuwfunction
1443core::core_arch::hexagon::v64q6_q_vcmp_gtand_qvwvwfunction
1444core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvbvbfunction
1445core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvhfvhffunction
1446core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvhvhfunction
1447core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvsfvsffunction
1448core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvubvubfunction
1449core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvuhvuhfunction
1450core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvuwvuwfunction
1451core::core_arch::hexagon::v64q6_q_vcmp_gtor_qvwvwfunction
1452core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvbvbfunction
1453core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvhfvhffunction
1454core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvhvhfunction
1455core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvsfvsffunction
1456core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvubvubfunction
1457core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvuhvuhfunction
1458core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvuwvuwfunction
1459core::core_arch::hexagon::v64q6_q_vcmp_gtxacc_qvwvwfunction
1460core::core_arch::hexagon::v64q6_q_vsetq2_rfunction
1461core::core_arch::hexagon::v64q6_q_vsetq_rfunction
1462core::core_arch::hexagon::v64q6_q_xor_qqfunction
1463core::core_arch::hexagon::v64q6_qb_vshuffe_qhqhfunction
1464core::core_arch::hexagon::v64q6_qh_vshuffe_qwqwfunction
1465core::core_arch::hexagon::v64q6_r_vextract_vrfunction
1466core::core_arch::hexagon::v64q6_v_equals_vfunction
1467core::core_arch::hexagon::v64q6_v_hi_wfunction
1468core::core_arch::hexagon::v64q6_v_lo_wfunction
1469core::core_arch::hexagon::v64q6_v_vabs_vfunction
1470core::core_arch::hexagon::v64q6_v_valign_vvifunction
1471core::core_arch::hexagon::v64q6_v_valign_vvrfunction
1472core::core_arch::hexagon::v64q6_v_vand_qnrfunction
1473core::core_arch::hexagon::v64q6_v_vand_qnvfunction
1474core::core_arch::hexagon::v64q6_v_vand_qrfunction
1475core::core_arch::hexagon::v64q6_v_vand_qvfunction
1476core::core_arch::hexagon::v64q6_v_vand_vvfunction
1477core::core_arch::hexagon::v64q6_v_vandor_vqnrfunction
1478core::core_arch::hexagon::v64q6_v_vandor_vqrfunction
1479core::core_arch::hexagon::v64q6_v_vdelta_vvfunction
1480core::core_arch::hexagon::v64q6_v_vfmax_vvfunction
1481core::core_arch::hexagon::v64q6_v_vfmin_vvfunction
1482core::core_arch::hexagon::v64q6_v_vfneg_vfunction
1483core::core_arch::hexagon::v64q6_v_vgetqfext_vrfunction
1484core::core_arch::hexagon::v64q6_v_vlalign_vvifunction
1485core::core_arch::hexagon::v64q6_v_vlalign_vvrfunction
1486core::core_arch::hexagon::v64q6_v_vmux_qvvfunction
1487core::core_arch::hexagon::v64q6_v_vnot_vfunction
1488core::core_arch::hexagon::v64q6_v_vor_vvfunction
1489core::core_arch::hexagon::v64q6_v_vrdelta_vvfunction
1490core::core_arch::hexagon::v64q6_v_vror_vrfunction
1491core::core_arch::hexagon::v64q6_v_vsetqfext_vrfunction
1492core::core_arch::hexagon::v64q6_v_vsplat_rfunction
1493core::core_arch::hexagon::v64q6_v_vxor_vvfunction
1494core::core_arch::hexagon::v64q6_v_vzerofunction
1495core::core_arch::hexagon::v64q6_vb_condacc_qnvbvbfunction
1496core::core_arch::hexagon::v64q6_vb_condacc_qvbvbfunction
1497core::core_arch::hexagon::v64q6_vb_condnac_qnvbvbfunction
1498core::core_arch::hexagon::v64q6_vb_condnac_qvbvbfunction
1499core::core_arch::hexagon::v64q6_vb_prefixsum_qfunction
1500core::core_arch::hexagon::v64q6_vb_vabs_vbfunction
1501core::core_arch::hexagon::v64q6_vb_vabs_vb_satfunction
1502core::core_arch::hexagon::v64q6_vb_vadd_vbvbfunction
1503core::core_arch::hexagon::v64q6_vb_vadd_vbvb_satfunction
1504core::core_arch::hexagon::v64q6_vb_vasr_vhvhr_rnd_satfunction
1505core::core_arch::hexagon::v64q6_vb_vasr_vhvhr_satfunction
1506core::core_arch::hexagon::v64q6_vb_vavg_vbvbfunction
1507core::core_arch::hexagon::v64q6_vb_vavg_vbvb_rndfunction
1508core::core_arch::hexagon::v64q6_vb_vcvt_vhfvhffunction
1509core::core_arch::hexagon::v64q6_vb_vdeal_vbfunction
1510core::core_arch::hexagon::v64q6_vb_vdeale_vbvbfunction
1511core::core_arch::hexagon::v64q6_vb_vlut32_vbvbifunction
1512core::core_arch::hexagon::v64q6_vb_vlut32_vbvbrfunction
1513core::core_arch::hexagon::v64q6_vb_vlut32_vbvbr_nomatchfunction
1514core::core_arch::hexagon::v64q6_vb_vlut32or_vbvbvbifunction
1515core::core_arch::hexagon::v64q6_vb_vlut32or_vbvbvbrfunction
1516core::core_arch::hexagon::v64q6_vb_vmax_vbvbfunction
1517core::core_arch::hexagon::v64q6_vb_vmin_vbvbfunction
1518core::core_arch::hexagon::v64q6_vb_vnavg_vbvbfunction
1519core::core_arch::hexagon::v64q6_vb_vnavg_vubvubfunction
1520core::core_arch::hexagon::v64q6_vb_vpack_vhvh_satfunction
1521core::core_arch::hexagon::v64q6_vb_vpacke_vhvhfunction
1522core::core_arch::hexagon::v64q6_vb_vpacko_vhvhfunction
1523core::core_arch::hexagon::v64q6_vb_vround_vhvh_satfunction
1524core::core_arch::hexagon::v64q6_vb_vshuff_vbfunction
1525core::core_arch::hexagon::v64q6_vb_vshuffe_vbvbfunction
1526core::core_arch::hexagon::v64q6_vb_vshuffo_vbvbfunction
1527core::core_arch::hexagon::v64q6_vb_vsplat_rfunction
1528core::core_arch::hexagon::v64q6_vb_vsub_vbvbfunction
1529core::core_arch::hexagon::v64q6_vb_vsub_vbvb_satfunction
1530core::core_arch::hexagon::v64q6_vgather_aqrmvhfunction
1531core::core_arch::hexagon::v64q6_vgather_aqrmvwfunction
1532core::core_arch::hexagon::v64q6_vgather_aqrmwwfunction
1533core::core_arch::hexagon::v64q6_vgather_armvhfunction
1534core::core_arch::hexagon::v64q6_vgather_armvwfunction
1535core::core_arch::hexagon::v64q6_vgather_armwwfunction
1536core::core_arch::hexagon::v64q6_vh_condacc_qnvhvhfunction
1537core::core_arch::hexagon::v64q6_vh_condacc_qvhvhfunction
1538core::core_arch::hexagon::v64q6_vh_condnac_qnvhvhfunction
1539core::core_arch::hexagon::v64q6_vh_condnac_qvhvhfunction
1540core::core_arch::hexagon::v64q6_vh_equals_vhffunction
1541core::core_arch::hexagon::v64q6_vh_prefixsum_qfunction
1542core::core_arch::hexagon::v64q6_vh_vabs_vhfunction
1543core::core_arch::hexagon::v64q6_vh_vabs_vh_satfunction
1544core::core_arch::hexagon::v64q6_vh_vadd_vclb_vhvhfunction
1545core::core_arch::hexagon::v64q6_vh_vadd_vhvhfunction
1546core::core_arch::hexagon::v64q6_vh_vadd_vhvh_satfunction
1547core::core_arch::hexagon::v64q6_vh_vasl_vhrfunction
1548core::core_arch::hexagon::v64q6_vh_vasl_vhvhfunction
1549core::core_arch::hexagon::v64q6_vh_vaslacc_vhvhrfunction
1550core::core_arch::hexagon::v64q6_vh_vasr_vhrfunction
1551core::core_arch::hexagon::v64q6_vh_vasr_vhvhfunction
1552core::core_arch::hexagon::v64q6_vh_vasr_vwvwrfunction
1553core::core_arch::hexagon::v64q6_vh_vasr_vwvwr_rnd_satfunction
1554core::core_arch::hexagon::v64q6_vh_vasr_vwvwr_satfunction
1555core::core_arch::hexagon::v64q6_vh_vasracc_vhvhrfunction
1556core::core_arch::hexagon::v64q6_vh_vavg_vhvhfunction
1557core::core_arch::hexagon::v64q6_vh_vavg_vhvh_rndfunction
1558core::core_arch::hexagon::v64q6_vh_vcvt_vhffunction
1559core::core_arch::hexagon::v64q6_vh_vdeal_vhfunction
1560core::core_arch::hexagon::v64q6_vh_vdmpy_vubrbfunction
1561core::core_arch::hexagon::v64q6_vh_vdmpyacc_vhvubrbfunction
1562core::core_arch::hexagon::v64q6_vh_vlsr_vhvhfunction
1563core::core_arch::hexagon::v64q6_vh_vmax_vhvhfunction
1564core::core_arch::hexagon::v64q6_vh_vmin_vhvhfunction
1565core::core_arch::hexagon::v64q6_vh_vmpy_vhrh_s1_rnd_satfunction
1566core::core_arch::hexagon::v64q6_vh_vmpy_vhrh_s1_satfunction
1567core::core_arch::hexagon::v64q6_vh_vmpy_vhvh_s1_rnd_satfunction
1568core::core_arch::hexagon::v64q6_vh_vmpyi_vhrbfunction
1569core::core_arch::hexagon::v64q6_vh_vmpyi_vhvhfunction
1570core::core_arch::hexagon::v64q6_vh_vmpyiacc_vhvhrbfunction
1571core::core_arch::hexagon::v64q6_vh_vmpyiacc_vhvhvhfunction
1572core::core_arch::hexagon::v64q6_vh_vnavg_vhvhfunction
1573core::core_arch::hexagon::v64q6_vh_vnormamt_vhfunction
1574core::core_arch::hexagon::v64q6_vh_vpack_vwvw_satfunction
1575core::core_arch::hexagon::v64q6_vh_vpacke_vwvwfunction
1576core::core_arch::hexagon::v64q6_vh_vpacko_vwvwfunction
1577core::core_arch::hexagon::v64q6_vh_vpopcount_vhfunction
1578core::core_arch::hexagon::v64q6_vh_vround_vwvw_satfunction
1579core::core_arch::hexagon::v64q6_vh_vsat_vwvwfunction
1580core::core_arch::hexagon::v64q6_vh_vshuff_vhfunction
1581core::core_arch::hexagon::v64q6_vh_vshuffe_vhvhfunction
1582core::core_arch::hexagon::v64q6_vh_vshuffo_vhvhfunction
1583core::core_arch::hexagon::v64q6_vh_vsplat_rfunction
1584core::core_arch::hexagon::v64q6_vh_vsub_vhvhfunction
1585core::core_arch::hexagon::v64q6_vh_vsub_vhvh_satfunction
1586core::core_arch::hexagon::v64q6_vhf_equals_vhfunction
1587core::core_arch::hexagon::v64q6_vhf_equals_vqf16function
1588core::core_arch::hexagon::v64q6_vhf_equals_wqf32function
1589core::core_arch::hexagon::v64q6_vhf_vabs_vhffunction
1590core::core_arch::hexagon::v64q6_vhf_vadd_vhfvhffunction
1591core::core_arch::hexagon::v64q6_vhf_vcvt_vhfunction
1592core::core_arch::hexagon::v64q6_vhf_vcvt_vsfvsffunction
1593core::core_arch::hexagon::v64q6_vhf_vcvt_vuhfunction
1594core::core_arch::hexagon::v64q6_vhf_vfmax_vhfvhffunction
1595core::core_arch::hexagon::v64q6_vhf_vfmin_vhfvhffunction
1596core::core_arch::hexagon::v64q6_vhf_vfneg_vhffunction
1597core::core_arch::hexagon::v64q6_vhf_vmax_vhfvhffunction
1598core::core_arch::hexagon::v64q6_vhf_vmin_vhfvhffunction
1599core::core_arch::hexagon::v64q6_vhf_vmpy_vhfvhffunction
1600core::core_arch::hexagon::v64q6_vhf_vmpyacc_vhfvhfvhffunction
1601core::core_arch::hexagon::v64q6_vhf_vsub_vhfvhffunction
1602core::core_arch::hexagon::v64q6_vmem_qnrivfunction
1603core::core_arch::hexagon::v64q6_vmem_qnriv_ntfunction
1604core::core_arch::hexagon::v64q6_vmem_qrivfunction
1605core::core_arch::hexagon::v64q6_vmem_qriv_ntfunction
1606core::core_arch::hexagon::v64q6_vqf16_vadd_vhfvhffunction
1607core::core_arch::hexagon::v64q6_vqf16_vadd_vqf16vhffunction
1608core::core_arch::hexagon::v64q6_vqf16_vadd_vqf16vqf16function
1609core::core_arch::hexagon::v64q6_vqf16_vmpy_vhfvhffunction
1610core::core_arch::hexagon::v64q6_vqf16_vmpy_vqf16vhffunction
1611core::core_arch::hexagon::v64q6_vqf16_vmpy_vqf16vqf16function
1612core::core_arch::hexagon::v64q6_vqf16_vsub_vhfvhffunction
1613core::core_arch::hexagon::v64q6_vqf16_vsub_vqf16vhffunction
1614core::core_arch::hexagon::v64q6_vqf16_vsub_vqf16vqf16function
1615core::core_arch::hexagon::v64q6_vqf32_vadd_vqf32vqf32function
1616core::core_arch::hexagon::v64q6_vqf32_vadd_vqf32vsffunction
1617core::core_arch::hexagon::v64q6_vqf32_vadd_vsfvsffunction
1618core::core_arch::hexagon::v64q6_vqf32_vmpy_vqf32vqf32function
1619core::core_arch::hexagon::v64q6_vqf32_vmpy_vsfvsffunction
1620core::core_arch::hexagon::v64q6_vqf32_vsub_vqf32vqf32function
1621core::core_arch::hexagon::v64q6_vqf32_vsub_vqf32vsffunction
1622core::core_arch::hexagon::v64q6_vqf32_vsub_vsfvsffunction
1623core::core_arch::hexagon::v64q6_vscatter_qrmvhvfunction
1624core::core_arch::hexagon::v64q6_vscatter_qrmvwvfunction
1625core::core_arch::hexagon::v64q6_vscatter_qrmwwvfunction
1626core::core_arch::hexagon::v64q6_vscatter_rmvhvfunction
1627core::core_arch::hexagon::v64q6_vscatter_rmvwvfunction
1628core::core_arch::hexagon::v64q6_vscatter_rmwwvfunction
1629core::core_arch::hexagon::v64q6_vscatteracc_rmvhvfunction
1630core::core_arch::hexagon::v64q6_vscatteracc_rmvwvfunction
1631core::core_arch::hexagon::v64q6_vscatteracc_rmwwvfunction
1632core::core_arch::hexagon::v64q6_vsf_equals_vqf32function
1633core::core_arch::hexagon::v64q6_vsf_equals_vwfunction
1634core::core_arch::hexagon::v64q6_vsf_vabs_vsffunction
1635core::core_arch::hexagon::v64q6_vsf_vadd_vsfvsffunction
1636core::core_arch::hexagon::v64q6_vsf_vdmpy_vhfvhffunction
1637core::core_arch::hexagon::v64q6_vsf_vdmpyacc_vsfvhfvhffunction
1638core::core_arch::hexagon::v64q6_vsf_vfmax_vsfvsffunction
1639core::core_arch::hexagon::v64q6_vsf_vfmin_vsfvsffunction
1640core::core_arch::hexagon::v64q6_vsf_vfneg_vsffunction
1641core::core_arch::hexagon::v64q6_vsf_vmax_vsfvsffunction
1642core::core_arch::hexagon::v64q6_vsf_vmin_vsfvsffunction
1643core::core_arch::hexagon::v64q6_vsf_vmpy_vsfvsffunction
1644core::core_arch::hexagon::v64q6_vsf_vsub_vsfvsffunction
1645core::core_arch::hexagon::v64q6_vub_vabsdiff_vubvubfunction
1646core::core_arch::hexagon::v64q6_vub_vadd_vubvb_satfunction
1647core::core_arch::hexagon::v64q6_vub_vadd_vubvub_satfunction
1648core::core_arch::hexagon::v64q6_vub_vasr_vhvhr_rnd_satfunction
1649core::core_arch::hexagon::v64q6_vub_vasr_vhvhr_satfunction
1650core::core_arch::hexagon::v64q6_vub_vasr_vuhvuhr_rnd_satfunction
1651core::core_arch::hexagon::v64q6_vub_vasr_vuhvuhr_satfunction
1652core::core_arch::hexagon::v64q6_vub_vasr_wuhvub_rnd_satfunction
1653core::core_arch::hexagon::v64q6_vub_vasr_wuhvub_satfunction
1654core::core_arch::hexagon::v64q6_vub_vavg_vubvubfunction
1655core::core_arch::hexagon::v64q6_vub_vavg_vubvub_rndfunction
1656core::core_arch::hexagon::v64q6_vub_vcvt_vhfvhffunction
1657core::core_arch::hexagon::v64q6_vub_vlsr_vubrfunction
1658core::core_arch::hexagon::v64q6_vub_vmax_vubvubfunction
1659core::core_arch::hexagon::v64q6_vub_vmin_vubvubfunction
1660core::core_arch::hexagon::v64q6_vub_vpack_vhvh_satfunction
1661core::core_arch::hexagon::v64q6_vub_vround_vhvh_satfunction
1662core::core_arch::hexagon::v64q6_vub_vround_vuhvuh_satfunction
1663core::core_arch::hexagon::v64q6_vub_vsat_vhvhfunction
1664core::core_arch::hexagon::v64q6_vub_vsub_vubvb_satfunction
1665core::core_arch::hexagon::v64q6_vub_vsub_vubvub_satfunction
1666core::core_arch::hexagon::v64q6_vuh_vabsdiff_vhvhfunction
1667core::core_arch::hexagon::v64q6_vuh_vabsdiff_vuhvuhfunction
1668core::core_arch::hexagon::v64q6_vuh_vadd_vuhvuh_satfunction
1669core::core_arch::hexagon::v64q6_vuh_vasr_vuwvuwr_rnd_satfunction
1670core::core_arch::hexagon::v64q6_vuh_vasr_vuwvuwr_satfunction
1671core::core_arch::hexagon::v64q6_vuh_vasr_vwvwr_rnd_satfunction
1672core::core_arch::hexagon::v64q6_vuh_vasr_vwvwr_satfunction
1673core::core_arch::hexagon::v64q6_vuh_vasr_wwvuh_rnd_satfunction
1674core::core_arch::hexagon::v64q6_vuh_vasr_wwvuh_satfunction
1675core::core_arch::hexagon::v64q6_vuh_vavg_vuhvuhfunction
1676core::core_arch::hexagon::v64q6_vuh_vavg_vuhvuh_rndfunction
1677core::core_arch::hexagon::v64q6_vuh_vcl0_vuhfunction
1678core::core_arch::hexagon::v64q6_vuh_vcvt_vhffunction
1679core::core_arch::hexagon::v64q6_vuh_vlsr_vuhrfunction
1680core::core_arch::hexagon::v64q6_vuh_vmax_vuhvuhfunction
1681core::core_arch::hexagon::v64q6_vuh_vmin_vuhvuhfunction
1682core::core_arch::hexagon::v64q6_vuh_vmpy_vuhvuh_rs16function
1683core::core_arch::hexagon::v64q6_vuh_vpack_vwvw_satfunction
1684core::core_arch::hexagon::v64q6_vuh_vround_vuwvuw_satfunction
1685core::core_arch::hexagon::v64q6_vuh_vround_vwvw_satfunction
1686core::core_arch::hexagon::v64q6_vuh_vsat_vuwvuwfunction
1687core::core_arch::hexagon::v64q6_vuh_vsub_vuhvuh_satfunction
1688core::core_arch::hexagon::v64q6_vuw_vabsdiff_vwvwfunction
1689core::core_arch::hexagon::v64q6_vuw_vadd_vuwvuw_satfunction
1690core::core_arch::hexagon::v64q6_vuw_vavg_vuwvuwfunction
1691core::core_arch::hexagon::v64q6_vuw_vavg_vuwvuw_rndfunction
1692core::core_arch::hexagon::v64q6_vuw_vcl0_vuwfunction
1693core::core_arch::hexagon::v64q6_vuw_vlsr_vuwrfunction
1694core::core_arch::hexagon::v64q6_vuw_vmpye_vuhruhfunction
1695core::core_arch::hexagon::v64q6_vuw_vmpyeacc_vuwvuhruhfunction
1696core::core_arch::hexagon::v64q6_vuw_vrmpy_vubrubfunction
1697core::core_arch::hexagon::v64q6_vuw_vrmpy_vubvubfunction
1698core::core_arch::hexagon::v64q6_vuw_vrmpyacc_vuwvubrubfunction
1699core::core_arch::hexagon::v64q6_vuw_vrmpyacc_vuwvubvubfunction
1700core::core_arch::hexagon::v64q6_vuw_vrotr_vuwvuwfunction
1701core::core_arch::hexagon::v64q6_vuw_vsub_vuwvuw_satfunction
1702core::core_arch::hexagon::v64q6_vw_condacc_qnvwvwfunction
1703core::core_arch::hexagon::v64q6_vw_condacc_qvwvwfunction
1704core::core_arch::hexagon::v64q6_vw_condnac_qnvwvwfunction
1705core::core_arch::hexagon::v64q6_vw_condnac_qvwvwfunction
1706core::core_arch::hexagon::v64q6_vw_equals_vsffunction
1707core::core_arch::hexagon::v64q6_vw_prefixsum_qfunction
1708core::core_arch::hexagon::v64q6_vw_vabs_vwfunction
1709core::core_arch::hexagon::v64q6_vw_vabs_vw_satfunction
1710core::core_arch::hexagon::v64q6_vw_vadd_vclb_vwvwfunction
1711core::core_arch::hexagon::v64q6_vw_vadd_vwvwfunction
1712core::core_arch::hexagon::v64q6_vw_vadd_vwvw_satfunction
1713core::core_arch::hexagon::v64q6_vw_vadd_vwvwq_carry_satfunction
1714core::core_arch::hexagon::v64q6_vw_vasl_vwrfunction
1715core::core_arch::hexagon::v64q6_vw_vasl_vwvwfunction
1716core::core_arch::hexagon::v64q6_vw_vaslacc_vwvwrfunction
1717core::core_arch::hexagon::v64q6_vw_vasr_vwrfunction
1718core::core_arch::hexagon::v64q6_vw_vasr_vwvwfunction
1719core::core_arch::hexagon::v64q6_vw_vasracc_vwvwrfunction
1720core::core_arch::hexagon::v64q6_vw_vavg_vwvwfunction
1721core::core_arch::hexagon::v64q6_vw_vavg_vwvw_rndfunction
1722core::core_arch::hexagon::v64q6_vw_vdmpy_vhrbfunction
1723core::core_arch::hexagon::v64q6_vw_vdmpy_vhrh_satfunction
1724core::core_arch::hexagon::v64q6_vw_vdmpy_vhruh_satfunction
1725core::core_arch::hexagon::v64q6_vw_vdmpy_vhvh_satfunction
1726core::core_arch::hexagon::v64q6_vw_vdmpy_whrh_satfunction
1727core::core_arch::hexagon::v64q6_vw_vdmpy_whruh_satfunction
1728core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwvhrbfunction
1729core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwvhrh_satfunction
1730core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwvhruh_satfunction
1731core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwvhvh_satfunction
1732core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwwhrh_satfunction
1733core::core_arch::hexagon::v64q6_vw_vdmpyacc_vwwhruh_satfunction
1734core::core_arch::hexagon::v64q6_vw_vfmv_vwfunction
1735core::core_arch::hexagon::v64q6_vw_vinsert_vwrfunction
1736core::core_arch::hexagon::v64q6_vw_vlsr_vwvwfunction
1737core::core_arch::hexagon::v64q6_vw_vmax_vwvwfunction
1738core::core_arch::hexagon::v64q6_vw_vmin_vwvwfunction
1739core::core_arch::hexagon::v64q6_vw_vmpye_vwvuhfunction
1740core::core_arch::hexagon::v64q6_vw_vmpyi_vwrbfunction
1741core::core_arch::hexagon::v64q6_vw_vmpyi_vwrhfunction
1742core::core_arch::hexagon::v64q6_vw_vmpyi_vwrubfunction
1743core::core_arch::hexagon::v64q6_vw_vmpyiacc_vwvwrbfunction
1744core::core_arch::hexagon::v64q6_vw_vmpyiacc_vwvwrhfunction
1745core::core_arch::hexagon::v64q6_vw_vmpyiacc_vwvwrubfunction
1746core::core_arch::hexagon::v64q6_vw_vmpyie_vwvuhfunction
1747core::core_arch::hexagon::v64q6_vw_vmpyieacc_vwvwvhfunction
1748core::core_arch::hexagon::v64q6_vw_vmpyieacc_vwvwvuhfunction
1749core::core_arch::hexagon::v64q6_vw_vmpyieo_vhvhfunction
1750core::core_arch::hexagon::v64q6_vw_vmpyio_vwvhfunction
1751core::core_arch::hexagon::v64q6_vw_vmpyo_vwvh_s1_rnd_satfunction
1752core::core_arch::hexagon::v64q6_vw_vmpyo_vwvh_s1_satfunction
1753core::core_arch::hexagon::v64q6_vw_vmpyoacc_vwvwvh_s1_rnd_sat_shiftfunction
1754core::core_arch::hexagon::v64q6_vw_vmpyoacc_vwvwvh_s1_sat_shiftfunction
1755core::core_arch::hexagon::v64q6_vw_vnavg_vwvwfunction
1756core::core_arch::hexagon::v64q6_vw_vnormamt_vwfunction
1757core::core_arch::hexagon::v64q6_vw_vrmpy_vbvbfunction
1758core::core_arch::hexagon::v64q6_vw_vrmpy_vubrbfunction
1759core::core_arch::hexagon::v64q6_vw_vrmpy_vubvbfunction
1760core::core_arch::hexagon::v64q6_vw_vrmpyacc_vwvbvbfunction
1761core::core_arch::hexagon::v64q6_vw_vrmpyacc_vwvubrbfunction
1762core::core_arch::hexagon::v64q6_vw_vrmpyacc_vwvubvbfunction
1763core::core_arch::hexagon::v64q6_vw_vsatdw_vwvwfunction
1764core::core_arch::hexagon::v64q6_vw_vsub_vwvwfunction
1765core::core_arch::hexagon::v64q6_vw_vsub_vwvw_satfunction
1766core::core_arch::hexagon::v64q6_w_equals_wfunction
1767core::core_arch::hexagon::v64q6_w_vcombine_vvfunction
1768core::core_arch::hexagon::v64q6_w_vdeal_vvrfunction
1769core::core_arch::hexagon::v64q6_w_vmpye_vwvuhfunction
1770core::core_arch::hexagon::v64q6_w_vmpyoacc_wvwvhfunction
1771core::core_arch::hexagon::v64q6_w_vshuff_vvrfunction
1772core::core_arch::hexagon::v64q6_w_vswap_qvvfunction
1773core::core_arch::hexagon::v64q6_w_vzerofunction
1774core::core_arch::hexagon::v64q6_wb_vadd_wbwbfunction
1775core::core_arch::hexagon::v64q6_wb_vadd_wbwb_satfunction
1776core::core_arch::hexagon::v64q6_wb_vshuffoe_vbvbfunction
1777core::core_arch::hexagon::v64q6_wb_vsub_wbwbfunction
1778core::core_arch::hexagon::v64q6_wb_vsub_wbwb_satfunction
1779core::core_arch::hexagon::v64q6_wh_vadd_vubvubfunction
1780core::core_arch::hexagon::v64q6_wh_vadd_whwhfunction
1781core::core_arch::hexagon::v64q6_wh_vadd_whwh_satfunction
1782core::core_arch::hexagon::v64q6_wh_vaddacc_whvubvubfunction
1783core::core_arch::hexagon::v64q6_wh_vdmpy_wubrbfunction
1784core::core_arch::hexagon::v64q6_wh_vdmpyacc_whwubrbfunction
1785core::core_arch::hexagon::v64q6_wh_vlut16_vbvhifunction
1786core::core_arch::hexagon::v64q6_wh_vlut16_vbvhrfunction
1787 | core::core_arch::hexagon::v64 | q6_wh_vlut16_vbvhr_nomatch | function
1788 | core::core_arch::hexagon::v64 | q6_wh_vlut16or_whvbvhi | function
1789 | core::core_arch::hexagon::v64 | q6_wh_vlut16or_whvbvhr | function
1790 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubrb | function
1791 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubrub | function
1792 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubwb | function
1793 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubwub | function
1794 | core::core_arch::hexagon::v64 | q6_wh_vmpaacc_whwubrb | function
1795 | core::core_arch::hexagon::v64 | q6_wh_vmpaacc_whwubrub | function
1796 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vbvb | function
1797 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vubrb | function
1798 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vubvb | function
1799 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvbvb | function
1800 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvubrb | function
1801 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvubvb | function
1802 | core::core_arch::hexagon::v64 | q6_wh_vshuffoe_vhvh | function
1803 | core::core_arch::hexagon::v64 | q6_wh_vsub_vubvub | function
1804 | core::core_arch::hexagon::v64 | q6_wh_vsub_whwh | function
1805 | core::core_arch::hexagon::v64 | q6_wh_vsub_whwh_sat | function
1806 | core::core_arch::hexagon::v64 | q6_wh_vsxt_vb | function
1807 | core::core_arch::hexagon::v64 | q6_wh_vtmpy_wbrb | function
1808 | core::core_arch::hexagon::v64 | q6_wh_vtmpy_wubrb | function
1809 | core::core_arch::hexagon::v64 | q6_wh_vtmpyacc_whwbrb | function
1810 | core::core_arch::hexagon::v64 | q6_wh_vtmpyacc_whwubrb | function
1811 | core::core_arch::hexagon::v64 | q6_wh_vunpack_vb | function
1812 | core::core_arch::hexagon::v64 | q6_wh_vunpackoor_whvb | function
1813 | core::core_arch::hexagon::v64 | q6_whf_vcvt2_vb | function
1814 | core::core_arch::hexagon::v64 | q6_whf_vcvt2_vub | function
1815 | core::core_arch::hexagon::v64 | q6_whf_vcvt_v | function
1816 | core::core_arch::hexagon::v64 | q6_whf_vcvt_vb | function
1817 | core::core_arch::hexagon::v64 | q6_whf_vcvt_vub | function
1818 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vhfvhf | function
1819 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vqf16vhf | function
1820 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vqf16vqf16 | function
1821 | core::core_arch::hexagon::v64 | q6_wsf_vadd_vhfvhf | function
1822 | core::core_arch::hexagon::v64 | q6_wsf_vcvt_vhf | function
1823 | core::core_arch::hexagon::v64 | q6_wsf_vmpy_vhfvhf | function
1824 | core::core_arch::hexagon::v64 | q6_wsf_vmpyacc_wsfvhfvhf | function
1825 | core::core_arch::hexagon::v64 | q6_wsf_vsub_vhfvhf | function
1826 | core::core_arch::hexagon::v64 | q6_wub_vadd_wubwub_sat | function
1827 | core::core_arch::hexagon::v64 | q6_wub_vsub_wubwub_sat | function
1828 | core::core_arch::hexagon::v64 | q6_wuh_vadd_wuhwuh_sat | function
1829 | core::core_arch::hexagon::v64 | q6_wuh_vmpy_vubrub | function
1830 | core::core_arch::hexagon::v64 | q6_wuh_vmpy_vubvub | function
1831 | core::core_arch::hexagon::v64 | q6_wuh_vmpyacc_wuhvubrub | function
1832 | core::core_arch::hexagon::v64 | q6_wuh_vmpyacc_wuhvubvub | function
1833 | core::core_arch::hexagon::v64 | q6_wuh_vsub_wuhwuh_sat | function
1834 | core::core_arch::hexagon::v64 | q6_wuh_vunpack_vub | function
1835 | core::core_arch::hexagon::v64 | q6_wuh_vzxt_vub | function
1836 | core::core_arch::hexagon::v64 | q6_wuw_vadd_wuwwuw_sat | function
1837 | core::core_arch::hexagon::v64 | q6_wuw_vdsad_wuhruh | function
1838 | core::core_arch::hexagon::v64 | q6_wuw_vdsadacc_wuwwuhruh | function
1839 | core::core_arch::hexagon::v64 | q6_wuw_vmpy_vuhruh | function
1840 | core::core_arch::hexagon::v64 | q6_wuw_vmpy_vuhvuh | function
1841 | core::core_arch::hexagon::v64 | q6_wuw_vmpyacc_wuwvuhruh | function
1842 | core::core_arch::hexagon::v64 | q6_wuw_vmpyacc_wuwvuhvuh | function
1843 | core::core_arch::hexagon::v64 | q6_wuw_vrmpy_wubrubi | function
1844 | core::core_arch::hexagon::v64 | q6_wuw_vrmpyacc_wuwwubrubi | function
1845 | core::core_arch::hexagon::v64 | q6_wuw_vrsad_wubrubi | function
1846 | core::core_arch::hexagon::v64 | q6_wuw_vrsadacc_wuwwubrubi | function
1847 | core::core_arch::hexagon::v64 | q6_wuw_vsub_wuwwuw_sat | function
1848 | core::core_arch::hexagon::v64 | q6_wuw_vunpack_vuh | function
1849 | core::core_arch::hexagon::v64 | q6_wuw_vzxt_vuh | function
1850 | core::core_arch::hexagon::v64 | q6_ww_v6mpy_wubwbi_h | function
1851 | core::core_arch::hexagon::v64 | q6_ww_v6mpy_wubwbi_v | function
1852 | core::core_arch::hexagon::v64 | q6_ww_v6mpyacc_wwwubwbi_h | function
1853 | core::core_arch::hexagon::v64 | q6_ww_v6mpyacc_wwwubwbi_v | function
1854 | core::core_arch::hexagon::v64 | q6_ww_vadd_vhvh | function
1855 | core::core_arch::hexagon::v64 | q6_ww_vadd_vuhvuh | function
1856 | core::core_arch::hexagon::v64 | q6_ww_vadd_wwww | function
1857 | core::core_arch::hexagon::v64 | q6_ww_vadd_wwww_sat | function
1858 | core::core_arch::hexagon::v64 | q6_ww_vaddacc_wwvhvh | function
1859 | core::core_arch::hexagon::v64 | q6_ww_vaddacc_wwvuhvuh | function
1860 | core::core_arch::hexagon::v64 | q6_ww_vasrinto_wwvwvw | function
1861 | core::core_arch::hexagon::v64 | q6_ww_vdmpy_whrb | function
1862 | core::core_arch::hexagon::v64 | q6_ww_vdmpyacc_wwwhrb | function
1863 | core::core_arch::hexagon::v64 | q6_ww_vmpa_whrb | function
1864 | core::core_arch::hexagon::v64 | q6_ww_vmpa_wuhrb | function
1865 | core::core_arch::hexagon::v64 | q6_ww_vmpaacc_wwwhrb | function
1866 | core::core_arch::hexagon::v64 | q6_ww_vmpaacc_wwwuhrb | function
1867 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhrh | function
1868 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhvh | function
1869 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhvuh | function
1870 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhrh | function
1871 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhrh_sat | function
1872 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhvh | function
1873 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhvuh | function
1874 | core::core_arch::hexagon::v64 | q6_ww_vrmpy_wubrbi | function
1875 | core::core_arch::hexagon::v64 | q6_ww_vrmpyacc_wwwubrbi | function
1876 | core::core_arch::hexagon::v64 | q6_ww_vsub_vhvh | function
1877 | core::core_arch::hexagon::v64 | q6_ww_vsub_vuhvuh | function
1878 | core::core_arch::hexagon::v64 | q6_ww_vsub_wwww | function
1879 | core::core_arch::hexagon::v64 | q6_ww_vsub_wwww_sat | function
1880 | core::core_arch::hexagon::v64 | q6_ww_vsxt_vh | function
1881 | core::core_arch::hexagon::v64 | q6_ww_vtmpy_whrb | function
1882 | core::core_arch::hexagon::v64 | q6_ww_vtmpyacc_wwwhrb | function
1883 | core::core_arch::hexagon::v64 | q6_ww_vunpack_vh | function
1884 | core::core_arch::hexagon::v64 | q6_ww_vunpackoor_wwvh | function
1885 | core::core_arch::loongarch32 | cacop | function
1886 | core::core_arch::loongarch32 | csrrd | function
1887 | core::core_arch::loongarch32 | csrwr | function
1888 | core::core_arch::loongarch32 | csrxchg | function
1889 | core::core_arch::loongarch64 | asrtgt | function
1890 | core::core_arch::loongarch64 | asrtle | function
1891 | core::core_arch::loongarch64 | cacop | function
1892 | core::core_arch::loongarch64 | csrrd | function
1893 | core::core_arch::loongarch64 | csrwr | function
1894 | core::core_arch::loongarch64 | csrxchg | function
1895 | core::core_arch::loongarch64 | iocsrrd_d | function
1896 | core::core_arch::loongarch64 | iocsrwr_d | function
1897 | core::core_arch::loongarch64 | lddir | function
1898 | core::core_arch::loongarch64 | ldpte | function
1899 | core::core_arch::loongarch64::lasx::generated | lasx_xvld | function
1900 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_b | function
1901 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_d | function
1902 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_h | function
1903 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_w | function
1904 | core::core_arch::loongarch64::lasx::generated | lasx_xvldx | function
1905 | core::core_arch::loongarch64::lasx::generated | lasx_xvst | function
1906 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_b | function
1907 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_d | function
1908 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_h | function
1909 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_w | function
1910 | core::core_arch::loongarch64::lasx::generated | lasx_xvstx | function
1911 | core::core_arch::loongarch64::lsx::generated | lsx_vld | function
1912 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_b | function
1913 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_d | function
1914 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_h | function
1915 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_w | function
1916 | core::core_arch::loongarch64::lsx::generated | lsx_vldx | function
1917 | core::core_arch::loongarch64::lsx::generated | lsx_vst | function
1918 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_b | function
1919 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_d | function
1920 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_h | function
1921 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_w | function
1922 | core::core_arch::loongarch64::lsx::generated | lsx_vstx | function
1923 | core::core_arch::loongarch_shared | brk | function
1924 | core::core_arch::loongarch_shared | iocsrrd_b | function
1925 | core::core_arch::loongarch_shared | iocsrrd_h | function
1926 | core::core_arch::loongarch_shared | iocsrrd_w | function
1927 | core::core_arch::loongarch_shared | iocsrwr_b | function
1928 | core::core_arch::loongarch_shared | iocsrwr_h | function
1929 | core::core_arch::loongarch_shared | iocsrwr_w | function
1930 | core::core_arch::loongarch_shared | movgr2fcsr | function
1931 | core::core_arch::loongarch_shared | syscall | function
1932 | core::core_arch::mips | break_ | function
1933 | core::core_arch::nvptx | __assert_fail | function
1934 | core::core_arch::nvptx | _block_dim_x | function
1935 | core::core_arch::nvptx | _block_dim_y | function
1936 | core::core_arch::nvptx | _block_dim_z | function
1937 | core::core_arch::nvptx | _block_idx_x | function
1938 | core::core_arch::nvptx | _block_idx_y | function
1939 | core::core_arch::nvptx | _block_idx_z | function
1940 | core::core_arch::nvptx | _grid_dim_x | function
1941 | core::core_arch::nvptx | _grid_dim_y | function
1942 | core::core_arch::nvptx | _grid_dim_z | function
1943 | core::core_arch::nvptx | _syncthreads | function
1944 | core::core_arch::nvptx | _thread_idx_x | function
1945 | core::core_arch::nvptx | _thread_idx_y | function
1946 | core::core_arch::nvptx | _thread_idx_z | function
1947 | core::core_arch::nvptx | free | function
1948 | core::core_arch::nvptx | malloc | function
1949 | core::core_arch::nvptx | trap | function
1950 | core::core_arch::nvptx | vprintf | function
1951 | core::core_arch::nvptx::packed | f16x2_add | function
1952 | core::core_arch::nvptx::packed | f16x2_fma | function
1953 | core::core_arch::nvptx::packed | f16x2_max | function
1954 | core::core_arch::nvptx::packed | f16x2_max_nan | function
1955 | core::core_arch::nvptx::packed | f16x2_min | function
1956 | core::core_arch::nvptx::packed | f16x2_min_nan | function
1957 | core::core_arch::nvptx::packed | f16x2_mul | function
1958 | core::core_arch::nvptx::packed | f16x2_neg | function
1959 | core::core_arch::nvptx::packed | f16x2_sub | function
1960 | core::core_arch::powerpc | trap | function
1961 | core::core_arch::powerpc64::vsx | vec_xl_len | function
1962 | core::core_arch::powerpc64::vsx | vec_xst_len | function
1963 | core::core_arch::powerpc::altivec | vec_abs | function
1964 | core::core_arch::powerpc::altivec | vec_abss | function
1965 | core::core_arch::powerpc::altivec | vec_add | function
1966 | core::core_arch::powerpc::altivec | vec_addc | function
1967 | core::core_arch::powerpc::altivec | vec_adde | function
1968 | core::core_arch::powerpc::altivec | vec_adds | function
1969 | core::core_arch::powerpc::altivec | vec_all_eq | function
1970 | core::core_arch::powerpc::altivec | vec_all_ge | function
1971 | core::core_arch::powerpc::altivec | vec_all_gt | function
1972 | core::core_arch::powerpc::altivec | vec_all_in | function
1973 | core::core_arch::powerpc::altivec | vec_all_le | function
1974 | core::core_arch::powerpc::altivec | vec_all_lt | function
1975 | core::core_arch::powerpc::altivec | vec_all_nan | function
1976 | core::core_arch::powerpc::altivec | vec_all_ne | function
1977 | core::core_arch::powerpc::altivec | vec_all_nge | function
1978 | core::core_arch::powerpc::altivec | vec_all_ngt | function
1979 | core::core_arch::powerpc::altivec | vec_all_nle | function
1980 | core::core_arch::powerpc::altivec | vec_all_nlt | function
1981 | core::core_arch::powerpc::altivec | vec_all_numeric | function
1982 | core::core_arch::powerpc::altivec | vec_and | function
1983 | core::core_arch::powerpc::altivec | vec_andc | function
1984 | core::core_arch::powerpc::altivec | vec_any_eq | function
1985 | core::core_arch::powerpc::altivec | vec_any_ge | function
1986 | core::core_arch::powerpc::altivec | vec_any_gt | function
1987 | core::core_arch::powerpc::altivec | vec_any_le | function
1988 | core::core_arch::powerpc::altivec | vec_any_lt | function
1989 | core::core_arch::powerpc::altivec | vec_any_nan | function
1990 | core::core_arch::powerpc::altivec | vec_any_ne | function
1991 | core::core_arch::powerpc::altivec | vec_any_nge | function
1992 | core::core_arch::powerpc::altivec | vec_any_ngt | function
1993 | core::core_arch::powerpc::altivec | vec_any_nle | function
1994 | core::core_arch::powerpc::altivec | vec_any_nlt | function
1995 | core::core_arch::powerpc::altivec | vec_any_numeric | function
1996 | core::core_arch::powerpc::altivec | vec_any_out | function
1997 | core::core_arch::powerpc::altivec | vec_avg | function
1998 | core::core_arch::powerpc::altivec | vec_ceil | function
1999 | core::core_arch::powerpc::altivec | vec_cmpb | function
2000 | core::core_arch::powerpc::altivec | vec_cmpeq | function
2001 | core::core_arch::powerpc::altivec | vec_cmpge | function
2002 | core::core_arch::powerpc::altivec | vec_cmpgt | function
2003 | core::core_arch::powerpc::altivec | vec_cmple | function
2004 | core::core_arch::powerpc::altivec | vec_cmplt | function
2005 | core::core_arch::powerpc::altivec | vec_cmpne | function
2006 | core::core_arch::powerpc::altivec | vec_cntlz | function
2007 | core::core_arch::powerpc::altivec | vec_ctf | function
2008 | core::core_arch::powerpc::altivec | vec_cts | function
2009 | core::core_arch::powerpc::altivec | vec_ctu | function
2010 | core::core_arch::powerpc::altivec | vec_expte | function
2011 | core::core_arch::powerpc::altivec | vec_extract | function
2012 | core::core_arch::powerpc::altivec | vec_floor | function
2013 | core::core_arch::powerpc::altivec | vec_insert | function
2014 | core::core_arch::powerpc::altivec | vec_ld | function
2015 | core::core_arch::powerpc::altivec | vec_lde | function
2016 | core::core_arch::powerpc::altivec | vec_ldl | function
2017 | core::core_arch::powerpc::altivec | vec_loge | function
2018 | core::core_arch::powerpc::altivec | vec_madd | function
2019 | core::core_arch::powerpc::altivec | vec_madds | function
2020 | core::core_arch::powerpc::altivec | vec_max | function
2021 | core::core_arch::powerpc::altivec | vec_mergeh | function
2022 | core::core_arch::powerpc::altivec | vec_mergel | function
2023 | core::core_arch::powerpc::altivec | vec_mfvscr | function
2024 | core::core_arch::powerpc::altivec | vec_min | function
2025 | core::core_arch::powerpc::altivec | vec_mladd | function
2026 | core::core_arch::powerpc::altivec | vec_mradds | function
2027 | core::core_arch::powerpc::altivec | vec_msum | function
2028 | core::core_arch::powerpc::altivec | vec_msums | function
2029 | core::core_arch::powerpc::altivec | vec_mul | function
2030 | core::core_arch::powerpc::altivec | vec_nand | function
2031 | core::core_arch::powerpc::altivec | vec_neg | function
2032 | core::core_arch::powerpc::altivec | vec_nmsub | function
2033 | core::core_arch::powerpc::altivec | vec_nor | function
2034 | core::core_arch::powerpc::altivec | vec_or | function
2035 | core::core_arch::powerpc::altivec | vec_orc | function
2036 | core::core_arch::powerpc::altivec | vec_pack | function
2037 | core::core_arch::powerpc::altivec | vec_packs | function
2038 | core::core_arch::powerpc::altivec | vec_packsu | function
2039 | core::core_arch::powerpc::altivec | vec_rl | function
2040 | core::core_arch::powerpc::altivec | vec_round | function
2041 | core::core_arch::powerpc::altivec | vec_sel | function
2042 | core::core_arch::powerpc::altivec | vec_sl | function
2043 | core::core_arch::powerpc::altivec | vec_sld | function
2044 | core::core_arch::powerpc::altivec | vec_sldw | function
2045 | core::core_arch::powerpc::altivec | vec_sll | function
2046 | core::core_arch::powerpc::altivec | vec_slo | function
2047 | core::core_arch::powerpc::altivec | vec_slv | function
2048 | core::core_arch::powerpc::altivec | vec_splat | function
2049 | core::core_arch::powerpc::altivec | vec_splat_s16 | function
2050 | core::core_arch::powerpc::altivec | vec_splat_s32 | function
2051 | core::core_arch::powerpc::altivec | vec_splat_s8 | function
2052 | core::core_arch::powerpc::altivec | vec_splat_u16 | function
2053 | core::core_arch::powerpc::altivec | vec_splat_u32 | function
2054 | core::core_arch::powerpc::altivec | vec_splat_u8 | function
2055 | core::core_arch::powerpc::altivec | vec_splats | function
2056 | core::core_arch::powerpc::altivec | vec_sr | function
2057 | core::core_arch::powerpc::altivec | vec_sra | function
2058 | core::core_arch::powerpc::altivec | vec_srl | function
2059 | core::core_arch::powerpc::altivec | vec_sro | function
2060 | core::core_arch::powerpc::altivec | vec_srv | function
2061 | core::core_arch::powerpc::altivec | vec_st | function
2062 | core::core_arch::powerpc::altivec | vec_ste | function
2063 | core::core_arch::powerpc::altivec | vec_stl | function
2064 | core::core_arch::powerpc::altivec | vec_sub | function
2065 | core::core_arch::powerpc::altivec | vec_subc | function
2066 | core::core_arch::powerpc::altivec | vec_subs | function
2067 | core::core_arch::powerpc::altivec | vec_sum4s | function
2068 | core::core_arch::powerpc::altivec | vec_unpackh | function
2069 | core::core_arch::powerpc::altivec | vec_unpackl | function
2070 | core::core_arch::powerpc::altivec | vec_xl | function
2071 | core::core_arch::powerpc::altivec | vec_xor | function
2072 | core::core_arch::powerpc::altivec | vec_xst | function
2073 | core::core_arch::powerpc::altivec::endian | vec_mule | function
2074 | core::core_arch::powerpc::altivec::endian | vec_mulo | function
2075 | core::core_arch::powerpc::altivec::endian | vec_perm | function
2076 | core::core_arch::powerpc::altivec::endian | vec_sum2s | function
2077 | core::core_arch::powerpc::vsx | vec_mergee | function
2078 | core::core_arch::powerpc::vsx | vec_mergeo | function
2079 | core::core_arch::powerpc::vsx | vec_xxpermdi | function
2080 | core::core_arch::riscv64 | hlv_d | function
2081 | core::core_arch::riscv64 | hlv_wu | function
2082 | core::core_arch::riscv64 | hsv_d | function
2083 | core::core_arch::riscv_shared | fence_i | function
2084 | core::core_arch::riscv_shared | hfence_gvma | function
2085 | core::core_arch::riscv_shared | hfence_gvma_all | function
2086 | core::core_arch::riscv_shared | hfence_gvma_gaddr | function
2087 | core::core_arch::riscv_shared | hfence_gvma_vmid | function
2088 | core::core_arch::riscv_shared | hfence_vvma | function
2089 | core::core_arch::riscv_shared | hfence_vvma_all | function
2090 | core::core_arch::riscv_shared | hfence_vvma_asid | function
2091 | core::core_arch::riscv_shared | hfence_vvma_vaddr | function
2092 | core::core_arch::riscv_shared | hinval_gvma | function
2093 | core::core_arch::riscv_shared | hinval_gvma_all | function
2094 | core::core_arch::riscv_shared | hinval_gvma_gaddr | function
2095 | core::core_arch::riscv_shared | hinval_gvma_vmid | function
2096 | core::core_arch::riscv_shared | hinval_vvma | function
2097 | core::core_arch::riscv_shared | hinval_vvma_all | function
2098 | core::core_arch::riscv_shared | hinval_vvma_asid | function
2099 | core::core_arch::riscv_shared | hinval_vvma_vaddr | function
2100 | core::core_arch::riscv_shared | hlv_b | function
2101 | core::core_arch::riscv_shared | hlv_bu | function
2102 | core::core_arch::riscv_shared | hlv_h | function
2103 | core::core_arch::riscv_shared | hlv_hu | function
2104 | core::core_arch::riscv_shared | hlv_w | function
2105 | core::core_arch::riscv_shared | hlvx_hu | function
2106 | core::core_arch::riscv_shared | hlvx_wu | function
2107 | core::core_arch::riscv_shared | hsv_b | function
2108 | core::core_arch::riscv_shared | hsv_h | function
2109 | core::core_arch::riscv_shared | hsv_w | function
2110 | core::core_arch::riscv_shared | sfence_inval_ir | function
2111 | core::core_arch::riscv_shared | sfence_vma | function
2112 | core::core_arch::riscv_shared | sfence_vma_all | function
2113 | core::core_arch::riscv_shared | sfence_vma_asid | function
2114 | core::core_arch::riscv_shared | sfence_vma_vaddr | function
2115 | core::core_arch::riscv_shared | sfence_w_inval | function
2116 | core::core_arch::riscv_shared | sinval_vma | function
2117 | core::core_arch::riscv_shared | sinval_vma_all | function
2118 | core::core_arch::riscv_shared | sinval_vma_asid | function
2119 | core::core_arch::riscv_shared | sinval_vma_vaddr | function
2120 | core::core_arch::riscv_shared | wfi | function
2121 | core::core_arch::s390x::vector | vec_abs | function
2122 | core::core_arch::s390x::vector | vec_add | function
2123 | core::core_arch::s390x::vector | vec_add_u128 | function
2124 | core::core_arch::s390x::vector | vec_addc_u128 | function
2125 | core::core_arch::s390x::vector | vec_adde_u128 | function
2126 | core::core_arch::s390x::vector | vec_addec_u128 | function
2127 | core::core_arch::s390x::vector | vec_all_eq | function
2128 | core::core_arch::s390x::vector | vec_all_ge | function
2129 | core::core_arch::s390x::vector | vec_all_gt | function
2130 | core::core_arch::s390x::vector | vec_all_le | function
2131 | core::core_arch::s390x::vector | vec_all_lt | function
2132 | core::core_arch::s390x::vector | vec_all_nan | function
2133 | core::core_arch::s390x::vector | vec_all_ne | function
2134 | core::core_arch::s390x::vector | vec_all_nge | function
2135 | core::core_arch::s390x::vector | vec_all_ngt | function
2136 | core::core_arch::s390x::vector | vec_all_nle | function
2137 | core::core_arch::s390x::vector | vec_all_nlt | function
2138 | core::core_arch::s390x::vector | vec_all_numeric | function
2139 | core::core_arch::s390x::vector | vec_and | function
2140 | core::core_arch::s390x::vector | vec_andc | function
2141 | core::core_arch::s390x::vector | vec_any_eq | function
2142 | core::core_arch::s390x::vector | vec_any_ge | function
2143 | core::core_arch::s390x::vector | vec_any_gt | function
2144 | core::core_arch::s390x::vector | vec_any_le | function
2145 | core::core_arch::s390x::vector | vec_any_lt | function
2146 | core::core_arch::s390x::vector | vec_any_nan | function
2147 | core::core_arch::s390x::vector | vec_any_ne | function
2148 | core::core_arch::s390x::vector | vec_any_nge | function
2149 | core::core_arch::s390x::vector | vec_any_ngt | function
2150 | core::core_arch::s390x::vector | vec_any_nle | function
2151 | core::core_arch::s390x::vector | vec_any_nlt | function
2152 | core::core_arch::s390x::vector | vec_any_numeric | function
2153 | core::core_arch::s390x::vector | vec_avg | function
2154 | core::core_arch::s390x::vector | vec_bperm_u128 | function
2155 | core::core_arch::s390x::vector | vec_ceil | function
2156 | core::core_arch::s390x::vector | vec_checksum | function
2157 | core::core_arch::s390x::vector | vec_cmpeq | function
2158 | core::core_arch::s390x::vector | vec_cmpeq_idx | function
2159 | core::core_arch::s390x::vector | vec_cmpeq_idx_cc | function
2160 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx | function
2161 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx_cc | function
2162 | core::core_arch::s390x::vector | vec_cmpge | function
2163 | core::core_arch::s390x::vector | vec_cmpgt | function
2164 | core::core_arch::s390x::vector | vec_cmple | function
2165 | core::core_arch::s390x::vector | vec_cmplt | function
2166 | core::core_arch::s390x::vector | vec_cmpne | function
2167 | core::core_arch::s390x::vector | vec_cmpne_idx | function
2168 | core::core_arch::s390x::vector | vec_cmpne_idx_cc | function
2169 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx | function
2170 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx_cc | function
2171 | core::core_arch::s390x::vector | vec_cmpnrg | function
2172 | core::core_arch::s390x::vector | vec_cmpnrg_cc | function
2173 | core::core_arch::s390x::vector | vec_cmpnrg_idx | function
2174 | core::core_arch::s390x::vector | vec_cmpnrg_idx_cc | function
2175 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx | function
2176 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx_cc | function
2177 | core::core_arch::s390x::vector | vec_cmprg | function
2178 | core::core_arch::s390x::vector | vec_cmprg_cc | function
2179 | core::core_arch::s390x::vector | vec_cmprg_idx | function
2180 | core::core_arch::s390x::vector | vec_cmprg_idx_cc | function
2181 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx | function
2182 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx_cc | function
2183 | core::core_arch::s390x::vector | vec_cntlz | function
2184 | core::core_arch::s390x::vector | vec_cnttz | function
2185 | core::core_arch::s390x::vector | vec_convert_from_fp16 | function
2186 | core::core_arch::s390x::vector | vec_convert_to_fp16 | function
2187 | core::core_arch::s390x::vector | vec_cp_until_zero | function
2188 | core::core_arch::s390x::vector | vec_cp_until_zero_cc | function
2189 | core::core_arch::s390x::vector | vec_double | function
2190 | core::core_arch::s390x::vector | vec_doublee | function
2191 | core::core_arch::s390x::vector | vec_eqv | function
2192 | core::core_arch::s390x::vector | vec_extend_s64 | function
2193 | core::core_arch::s390x::vector | vec_extend_to_fp32_hi | function
2194 | core::core_arch::s390x::vector | vec_extend_to_fp32_lo | function
2195 | core::core_arch::s390x::vector | vec_extract | function
2196 | core::core_arch::s390x::vector | vec_find_any_eq | function
2197 | core::core_arch::s390x::vector | vec_find_any_eq_cc | function
2198 | core::core_arch::s390x::vector | vec_find_any_eq_idx | function
2199 | core::core_arch::s390x::vector | vec_find_any_eq_idx_cc | function
2200 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx | function
2201 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx_cc | function
2202 | core::core_arch::s390x::vector | vec_find_any_ne | function
2203 | core::core_arch::s390x::vector | vec_find_any_ne_cc | function
2204 | core::core_arch::s390x::vector | vec_find_any_ne_idx | function
2205 | core::core_arch::s390x::vector | vec_find_any_ne_idx_cc | function
2206 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx | function
2207 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx_cc | function
2208 | core::core_arch::s390x::vector | vec_float | function
2209 | core::core_arch::s390x::vector | vec_floate | function
2210 | core::core_arch::s390x::vector | vec_floor | function
2211 | core::core_arch::s390x::vector | vec_fp_test_data_class | function
2212 | core::core_arch::s390x::vector | vec_gather_element | function
2213 | core::core_arch::s390x::vector | vec_genmask | function
2214 | core::core_arch::s390x::vector | vec_genmasks_16 | function
2215 | core::core_arch::s390x::vector | vec_genmasks_32 | function
2216 | core::core_arch::s390x::vector | vec_genmasks_64 | function
2217 | core::core_arch::s390x::vector | vec_genmasks_8 | function
2218 | core::core_arch::s390x::vector | vec_gfmsum | function
2219 | core::core_arch::s390x::vector | vec_gfmsum_128 | function
2220 | core::core_arch::s390x::vector | vec_gfmsum_accum | function
2221 | core::core_arch::s390x::vector | vec_gfmsum_accum_128 | function
2222 | core::core_arch::s390x::vector | vec_insert | function
2223 | core::core_arch::s390x::vector | vec_insert_and_zero | function
2224 | core::core_arch::s390x::vector | vec_load_bndry | function
2225 | core::core_arch::s390x::vector | vec_load_len | function
2226 | core::core_arch::s390x::vector | vec_load_len_r | function
2227 | core::core_arch::s390x::vector | vec_load_pair | function
2228 | core::core_arch::s390x::vector | vec_madd | function
2229 | core::core_arch::s390x::vector | vec_max | function
2230 | core::core_arch::s390x::vector | vec_meadd | function
2231 | core::core_arch::s390x::vector | vec_mergeh | function
2232 | core::core_arch::s390x::vector | vec_mergel | function
2233 | core::core_arch::s390x::vector | vec_mhadd | function
2234 | core::core_arch::s390x::vector | vec_min | function
2235 | core::core_arch::s390x::vector | vec_mladd | function
2236 | core::core_arch::s390x::vector | vec_moadd | function
2237 | core::core_arch::s390x::vector | vec_msub | function
2238 | core::core_arch::s390x::vector | vec_msum_u128 | function
2239 | core::core_arch::s390x::vector | vec_mul | function
2240 | core::core_arch::s390x::vector | vec_mule | function
2241 | core::core_arch::s390x::vector | vec_mulh | function
2242 | core::core_arch::s390x::vector | vec_mulo | function
2243 | core::core_arch::s390x::vector | vec_nabs | function
2244 | core::core_arch::s390x::vector | vec_nand | function
2245 | core::core_arch::s390x::vector | vec_neg | function
2246 | core::core_arch::s390x::vector | vec_nmadd | function
2247 | core::core_arch::s390x::vector | vec_nmsub | function
2248 | core::core_arch::s390x::vector | vec_nor | function
2249 | core::core_arch::s390x::vector | vec_or | function
2250 | core::core_arch::s390x::vector | vec_orc | function
2251 | core::core_arch::s390x::vector | vec_pack | function
2252 | core::core_arch::s390x::vector | vec_packs | function
2253 | core::core_arch::s390x::vector | vec_packs_cc | function
2254 | core::core_arch::s390x::vector | vec_packsu | function
2255 | core::core_arch::s390x::vector | vec_packsu_cc | function
2256 | core::core_arch::s390x::vector | vec_perm | function
2257 | core::core_arch::s390x::vector | vec_popcnt | function
2258 | core::core_arch::s390x::vector | vec_promote | function
2259 | core::core_arch::s390x::vector | vec_revb | function
2260 | core::core_arch::s390x::vector | vec_reve | function
2261 | core::core_arch::s390x::vector | vec_rint | function
2262 | core::core_arch::s390x::vector | vec_rl | function
2263 | core::core_arch::s390x::vector | vec_rli | function
2264 | core::core_arch::s390x::vector | vec_round | function
2265 | core::core_arch::s390x::vector | vec_round_from_fp32 | function
2266 | core::core_arch::s390x::vector | vec_roundc | function
2267 | core::core_arch::s390x::vector | vec_roundm | function
2268 | core::core_arch::s390x::vector | vec_roundp | function
2269 | core::core_arch::s390x::vector | vec_roundz | function
2270 | core::core_arch::s390x::vector | vec_search_string_cc | function
2271 | core::core_arch::s390x::vector | vec_search_string_until_zero_cc | function
2272 | core::core_arch::s390x::vector | vec_sel | function
2273 | core::core_arch::s390x::vector | vec_signed | function
2274 | core::core_arch::s390x::vector | vec_sl | function
2275 | core::core_arch::s390x::vector | vec_slb | function
2276 | core::core_arch::s390x::vector | vec_sld | function
2277 | core::core_arch::s390x::vector | vec_sldb | function
2278 | core::core_arch::s390x::vector | vec_sldw | function
2279 | core::core_arch::s390x::vector | vec_sll | function
2280 | core::core_arch::s390x::vector | vec_splat | function
2281 | core::core_arch::s390x::vector | vec_splat_s16 | function
2282 | core::core_arch::s390x::vector | vec_splat_s32 | function
2283 | core::core_arch::s390x::vector | vec_splat_s64 | function
2284 | core::core_arch::s390x::vector | vec_splat_s8 | function
2285 | core::core_arch::s390x::vector | vec_splat_u16 | function
2286 | core::core_arch::s390x::vector | vec_splat_u32 | function
2287 | core::core_arch::s390x::vector | vec_splat_u64 | function
2288 | core::core_arch::s390x::vector | vec_splat_u8 | function
2289 | core::core_arch::s390x::vector | vec_splats | function
2290 | core::core_arch::s390x::vector | vec_sqrt | function
2291 | core::core_arch::s390x::vector | vec_sr | function
2292 | core::core_arch::s390x::vector | vec_sra | function
2293 | core::core_arch::s390x::vector | vec_srab | function
2294 | core::core_arch::s390x::vector | vec_sral | function
2295 | core::core_arch::s390x::vector | vec_srb | function
2296 | core::core_arch::s390x::vector | vec_srdb | function
2297 | core::core_arch::s390x::vector | vec_srl | function
2298 | core::core_arch::s390x::vector | vec_store_len | function
2299 | core::core_arch::s390x::vector | vec_store_len_r | function
2300 | core::core_arch::s390x::vector | vec_sub | function
2301 | core::core_arch::s390x::vector | vec_sub_u128 | function
2302 | core::core_arch::s390x::vector | vec_subc | function
2303 | core::core_arch::s390x::vector | vec_subc_u128 | function
2304 | core::core_arch::s390x::vector | vec_sube_u128 | function
2305 | core::core_arch::s390x::vector | vec_subec_u128 | function
2306 | core::core_arch::s390x::vector | vec_sum2 | function
2307 | core::core_arch::s390x::vector | vec_sum4 | function
2308 | core::core_arch::s390x::vector | vec_sum_u128 | function
2309 | core::core_arch::s390x::vector | vec_test_mask | function
2310 | core::core_arch::s390x::vector | vec_trunc | function
2311 | core::core_arch::s390x::vector | vec_unpackh | function
2312 | core::core_arch::s390x::vector | vec_unpackl | function
2313 | core::core_arch::s390x::vector | vec_unsigned | function
2314 | core::core_arch::s390x::vector | vec_xl | function
2315 | core::core_arch::s390x::vector | vec_xor | function
2316 | core::core_arch::s390x::vector | vec_xst | function
2317 | core::core_arch::wasm32::atomic | memory_atomic_notify | function
2318 | core::core_arch::wasm32::atomic | memory_atomic_wait32 | function
2319 | core::core_arch::wasm32::atomic | memory_atomic_wait64 | function
2320 | core::core_arch::wasm32::simd128 | i16x8_load_extend_i8x8 | function
2321 | core::core_arch::wasm32::simd128 | i16x8_load_extend_u8x8 | function
2322 | core::core_arch::wasm32::simd128 | i32x4_load_extend_i16x4 | function
2323 | core::core_arch::wasm32::simd128 | i32x4_load_extend_u16x4 | function
2324 | core::core_arch::wasm32::simd128 | i64x2_load_extend_i32x2 | function
2325 | core::core_arch::wasm32::simd128 | i64x2_load_extend_u32x2 | function
2326 | core::core_arch::wasm32::simd128 | v128_load | function
2327 | core::core_arch::wasm32::simd128 | v128_load16_lane | function
2328 | core::core_arch::wasm32::simd128 | v128_load16_splat | function
2329 | core::core_arch::wasm32::simd128 | v128_load32_lane | function
2330 | core::core_arch::wasm32::simd128 | v128_load32_splat | function
2331 | core::core_arch::wasm32::simd128 | v128_load32_zero | function
2332 | core::core_arch::wasm32::simd128 | v128_load64_lane | function
2333 | core::core_arch::wasm32::simd128 | v128_load64_splat | function
2334 | core::core_arch::wasm32::simd128 | v128_load64_zero | function
2335 | core::core_arch::wasm32::simd128 | v128_load8_lane | function
2336 | core::core_arch::wasm32::simd128 | v128_load8_splat | function
2337 | core::core_arch::wasm32::simd128 | v128_store | function
2338 | core::core_arch::wasm32::simd128 | v128_store16_lane | function
2339 | core::core_arch::wasm32::simd128 | v128_store32_lane | function
2340 | core::core_arch::wasm32::simd128 | v128_store64_lane | function
2341 | core::core_arch::wasm32::simd128 | v128_store8_lane | function
2342 | core::core_arch::x86::avx | _mm256_lddqu_si256 | function
2343 | core::core_arch::x86::avx | _mm256_load_pd | function
2344 | core::core_arch::x86::avx | _mm256_load_ps | function
2345 | core::core_arch::x86::avx | _mm256_load_si256 | function
2346 | core::core_arch::x86::avx | _mm256_loadu2_m128 | function
2347 | core::core_arch::x86::avx | _mm256_loadu2_m128d | function
2348 | core::core_arch::x86::avx | _mm256_loadu2_m128i | function
2349 | core::core_arch::x86::avx | _mm256_loadu_pd | function
2350 | core::core_arch::x86::avx | _mm256_loadu_ps | function
2351 | core::core_arch::x86::avx | _mm256_loadu_si256 | function
2352 | core::core_arch::x86::avx | _mm256_maskload_pd | function
2353 | core::core_arch::x86::avx | _mm256_maskload_ps | function
2354 | core::core_arch::x86::avx | _mm256_maskstore_pd | function
2355 | core::core_arch::x86::avx | _mm256_maskstore_ps | function
2356 | core::core_arch::x86::avx | _mm256_store_pd | function
2357 | core::core_arch::x86::avx | _mm256_store_ps | function
2358 | core::core_arch::x86::avx | _mm256_store_si256 | function
2359 | core::core_arch::x86::avx | _mm256_storeu2_m128 | function
2360 | core::core_arch::x86::avx | _mm256_storeu2_m128d | function
2361 | core::core_arch::x86::avx | _mm256_storeu2_m128i | function
2362 | core::core_arch::x86::avx | _mm256_storeu_pd | function
2363 | core::core_arch::x86::avx | _mm256_storeu_ps | function
2364 | core::core_arch::x86::avx | _mm256_storeu_si256 | function
2365 | core::core_arch::x86::avx | _mm256_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2366 | core::core_arch::x86::avx | _mm256_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2367 | core::core_arch::x86::avx | _mm256_stream_si256 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2368 | core::core_arch::x86::avx | _mm_maskload_pd | function
2369 | core::core_arch::x86::avx | _mm_maskload_ps | function
2370 | core::core_arch::x86::avx | _mm_maskstore_pd | function
2371 | core::core_arch::x86::avx | _mm_maskstore_ps | function
2372 | core::core_arch::x86::avx2 | _mm256_i32gather_epi32 | function
2373 | core::core_arch::x86::avx2 | _mm256_i32gather_epi64 | function
2374 | core::core_arch::x86::avx2 | _mm256_i32gather_pd | function
2375 | core::core_arch::x86::avx2 | _mm256_i32gather_ps | function
2376 | core::core_arch::x86::avx2 | _mm256_i64gather_epi32 | function
2377 | core::core_arch::x86::avx2 | _mm256_i64gather_epi64 | function
2378 | core::core_arch::x86::avx2 | _mm256_i64gather_pd | function
2379 | core::core_arch::x86::avx2 | _mm256_i64gather_ps | function
2380 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi32 | function
2381 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi64 | function
2382 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_pd | function
2383 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_ps | function
2384 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi32 | function
2385 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi64 | function
2386 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_pd | function
2387 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_ps | function
2388 | core::core_arch::x86::avx2 | _mm256_maskload_epi32 | function
2389 | core::core_arch::x86::avx2 | _mm256_maskload_epi64 | function
2390 | core::core_arch::x86::avx2 | _mm256_maskstore_epi32 | function
2391 | core::core_arch::x86::avx2 | _mm256_maskstore_epi64 | function
2392 | core::core_arch::x86::avx2 | _mm256_stream_load_si256 | function
2393 | core::core_arch::x86::avx2 | _mm_i32gather_epi32 | function
2394 | core::core_arch::x86::avx2 | _mm_i32gather_epi64 | function
2395 | core::core_arch::x86::avx2 | _mm_i32gather_pd | function
2396 | core::core_arch::x86::avx2 | _mm_i32gather_ps | function
2397 | core::core_arch::x86::avx2 | _mm_i64gather_epi32 | function
2398 | core::core_arch::x86::avx2 | _mm_i64gather_epi64 | function
2399 | core::core_arch::x86::avx2 | _mm_i64gather_pd | function
2400 | core::core_arch::x86::avx2 | _mm_i64gather_ps | function
2401 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi32 | function
2402 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi64 | function
2403 | core::core_arch::x86::avx2 | _mm_mask_i32gather_pd | function
2404 | core::core_arch::x86::avx2 | _mm_mask_i32gather_ps | function
2405 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi32 | function
2406 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi64 | function
2407 | core::core_arch::x86::avx2 | _mm_mask_i64gather_pd | function
2408 | core::core_arch::x86::avx2 | _mm_mask_i64gather_ps | function
2409core::core_arch::x86::avx2_mm_maskload_epi32function
2410core::core_arch::x86::avx2_mm_maskload_epi64function
2411core::core_arch::x86::avx2_mm_maskstore_epi32function
2412core::core_arch::x86::avx2_mm_maskstore_epi64function
2413core::core_arch::x86::avx512bw_kortest_mask32_u8function
2414core::core_arch::x86::avx512bw_kortest_mask64_u8function
2415core::core_arch::x86::avx512bw_ktest_mask32_u8function
2416core::core_arch::x86::avx512bw_ktest_mask64_u8function
2417core::core_arch::x86::avx512bw_load_mask32function
2418core::core_arch::x86::avx512bw_load_mask64function
2419core::core_arch::x86::avx512bw_mm256_loadu_epi16function
2420core::core_arch::x86::avx512bw_mm256_loadu_epi8function
2421core::core_arch::x86::avx512bw_mm256_mask_cvtepi16_storeu_epi8function
2422core::core_arch::x86::avx512bw_mm256_mask_cvtsepi16_storeu_epi8function
2423core::core_arch::x86::avx512bw_mm256_mask_cvtusepi16_storeu_epi8function
2424core::core_arch::x86::avx512bw_mm256_mask_loadu_epi16function
2425core::core_arch::x86::avx512bw_mm256_mask_loadu_epi8function
2426core::core_arch::x86::avx512bw_mm256_mask_storeu_epi16function
2427core::core_arch::x86::avx512bw_mm256_mask_storeu_epi8function
2428core::core_arch::x86::avx512bw_mm256_maskz_loadu_epi16function
2429core::core_arch::x86::avx512bw_mm256_maskz_loadu_epi8function
2430core::core_arch::x86::avx512bw_mm256_storeu_epi16function
2431core::core_arch::x86::avx512bw_mm256_storeu_epi8function
2432core::core_arch::x86::avx512bw_mm512_loadu_epi16function
2433core::core_arch::x86::avx512bw_mm512_loadu_epi8function
2434core::core_arch::x86::avx512bw_mm512_mask_cvtepi16_storeu_epi8function
2435core::core_arch::x86::avx512bw_mm512_mask_cvtsepi16_storeu_epi8function
2436core::core_arch::x86::avx512bw_mm512_mask_cvtusepi16_storeu_epi8function
2437core::core_arch::x86::avx512bw_mm512_mask_loadu_epi16function
2438core::core_arch::x86::avx512bw_mm512_mask_loadu_epi8function
2439core::core_arch::x86::avx512bw_mm512_mask_storeu_epi16function
2440core::core_arch::x86::avx512bw_mm512_mask_storeu_epi8function
2441core::core_arch::x86::avx512bw_mm512_maskz_loadu_epi16function
2442core::core_arch::x86::avx512bw_mm512_maskz_loadu_epi8function
2443core::core_arch::x86::avx512bw_mm512_storeu_epi16function
2444core::core_arch::x86::avx512bw_mm512_storeu_epi8function
2445core::core_arch::x86::avx512bw_mm_loadu_epi16function
2446core::core_arch::x86::avx512bw_mm_loadu_epi8function
2447core::core_arch::x86::avx512bw_mm_mask_cvtepi16_storeu_epi8function
2448core::core_arch::x86::avx512bw_mm_mask_cvtsepi16_storeu_epi8function
2449core::core_arch::x86::avx512bw_mm_mask_cvtusepi16_storeu_epi8function
2450core::core_arch::x86::avx512bw_mm_mask_loadu_epi16function
2451core::core_arch::x86::avx512bw_mm_mask_loadu_epi8function
2452core::core_arch::x86::avx512bw_mm_mask_storeu_epi16function
2453core::core_arch::x86::avx512bw_mm_mask_storeu_epi8function
2454core::core_arch::x86::avx512bw_mm_maskz_loadu_epi16function
2455core::core_arch::x86::avx512bw_mm_maskz_loadu_epi8function
2456core::core_arch::x86::avx512bw_mm_storeu_epi16function
2457core::core_arch::x86::avx512bw_mm_storeu_epi8function
2458core::core_arch::x86::avx512bw_store_mask32function
2459core::core_arch::x86::avx512bw_store_mask64function
2460core::core_arch::x86::avx512dq_kortest_mask8_u8function
2461core::core_arch::x86::avx512dq_ktest_mask16_u8function
2462core::core_arch::x86::avx512dq_ktest_mask8_u8function
2463core::core_arch::x86::avx512dq_load_mask8function
2464core::core_arch::x86::avx512dq_store_mask8function
2465core::core_arch::x86::avx512f_kortest_mask16_u8function
2466core::core_arch::x86::avx512f_load_mask16function
2467core::core_arch::x86::avx512f_mm256_i32scatter_epi32function
2468core::core_arch::x86::avx512f_mm256_i32scatter_epi64function
2469core::core_arch::x86::avx512f_mm256_i32scatter_pdfunction
2470core::core_arch::x86::avx512f_mm256_i32scatter_psfunction
2471core::core_arch::x86::avx512f_mm256_i64scatter_epi32function
2472core::core_arch::x86::avx512f_mm256_i64scatter_epi64function
2473core::core_arch::x86::avx512f_mm256_i64scatter_pdfunction
2474core::core_arch::x86::avx512f_mm256_i64scatter_psfunction
2475core::core_arch::x86::avx512f_mm256_load_epi32function
2476core::core_arch::x86::avx512f_mm256_load_epi64function
2477core::core_arch::x86::avx512f_mm256_loadu_epi32function
2478core::core_arch::x86::avx512f_mm256_loadu_epi64function
2479core::core_arch::x86::avx512f_mm256_mask_compressstoreu_epi32function
2480core::core_arch::x86::avx512f_mm256_mask_compressstoreu_epi64function
2481core::core_arch::x86::avx512f_mm256_mask_compressstoreu_pdfunction
2482core::core_arch::x86::avx512f_mm256_mask_compressstoreu_psfunction
2483core::core_arch::x86::avx512f_mm256_mask_cvtepi32_storeu_epi16function
2484core::core_arch::x86::avx512f_mm256_mask_cvtepi32_storeu_epi8function
2485core::core_arch::x86::avx512f_mm256_mask_cvtepi64_storeu_epi16function
2486core::core_arch::x86::avx512f_mm256_mask_cvtepi64_storeu_epi32function
2487core::core_arch::x86::avx512f_mm256_mask_cvtepi64_storeu_epi8function
2488core::core_arch::x86::avx512f_mm256_mask_cvtsepi32_storeu_epi16function
2489core::core_arch::x86::avx512f_mm256_mask_cvtsepi32_storeu_epi8function
2490core::core_arch::x86::avx512f_mm256_mask_cvtsepi64_storeu_epi16function
2491core::core_arch::x86::avx512f_mm256_mask_cvtsepi64_storeu_epi32function
2492core::core_arch::x86::avx512f_mm256_mask_cvtsepi64_storeu_epi8function
2493core::core_arch::x86::avx512f_mm256_mask_cvtusepi32_storeu_epi16function
2494core::core_arch::x86::avx512f_mm256_mask_cvtusepi32_storeu_epi8function
2495core::core_arch::x86::avx512f_mm256_mask_cvtusepi64_storeu_epi16function
2496core::core_arch::x86::avx512f_mm256_mask_cvtusepi64_storeu_epi32function
2497core::core_arch::x86::avx512f_mm256_mask_cvtusepi64_storeu_epi8function
2498core::core_arch::x86::avx512f_mm256_mask_expandloadu_epi32function
2499core::core_arch::x86::avx512f_mm256_mask_expandloadu_epi64function
2500core::core_arch::x86::avx512f_mm256_mask_expandloadu_pdfunction
2501core::core_arch::x86::avx512f_mm256_mask_expandloadu_psfunction
2502core::core_arch::x86::avx512f_mm256_mask_i32scatter_epi32function
2503core::core_arch::x86::avx512f_mm256_mask_i32scatter_epi64function
2504core::core_arch::x86::avx512f_mm256_mask_i32scatter_pdfunction
2505core::core_arch::x86::avx512f_mm256_mask_i32scatter_psfunction
2506core::core_arch::x86::avx512f_mm256_mask_i64scatter_epi32function
2507core::core_arch::x86::avx512f_mm256_mask_i64scatter_epi64function
2508core::core_arch::x86::avx512f_mm256_mask_i64scatter_pdfunction
2509core::core_arch::x86::avx512f_mm256_mask_i64scatter_psfunction
2510core::core_arch::x86::avx512f_mm256_mask_load_epi32function
2511core::core_arch::x86::avx512f_mm256_mask_load_epi64function
2512core::core_arch::x86::avx512f_mm256_mask_load_pdfunction
2513core::core_arch::x86::avx512f_mm256_mask_load_psfunction
2514core::core_arch::x86::avx512f_mm256_mask_loadu_epi32function
2515core::core_arch::x86::avx512f_mm256_mask_loadu_epi64function
2516core::core_arch::x86::avx512f_mm256_mask_loadu_pdfunction
2517core::core_arch::x86::avx512f_mm256_mask_loadu_psfunction
2518core::core_arch::x86::avx512f_mm256_mask_store_epi32function
2519core::core_arch::x86::avx512f_mm256_mask_store_epi64function
2520core::core_arch::x86::avx512f_mm256_mask_store_pdfunction
2521core::core_arch::x86::avx512f_mm256_mask_store_psfunction
2522core::core_arch::x86::avx512f_mm256_mask_storeu_epi32function
2523core::core_arch::x86::avx512f_mm256_mask_storeu_epi64function
2524core::core_arch::x86::avx512f_mm256_mask_storeu_pdfunction
2525core::core_arch::x86::avx512f_mm256_mask_storeu_psfunction
2526core::core_arch::x86::avx512f_mm256_maskz_expandloadu_epi32function
2527core::core_arch::x86::avx512f_mm256_maskz_expandloadu_epi64function
2528core::core_arch::x86::avx512f_mm256_maskz_expandloadu_pdfunction
2529core::core_arch::x86::avx512f_mm256_maskz_expandloadu_psfunction
2530core::core_arch::x86::avx512f_mm256_maskz_load_epi32function
2531core::core_arch::x86::avx512f_mm256_maskz_load_epi64function
2532core::core_arch::x86::avx512f_mm256_maskz_load_pdfunction
2533core::core_arch::x86::avx512f_mm256_maskz_load_psfunction
2534core::core_arch::x86::avx512f_mm256_maskz_loadu_epi32function
2535core::core_arch::x86::avx512f_mm256_maskz_loadu_epi64function
2536core::core_arch::x86::avx512f_mm256_maskz_loadu_pdfunction
2537core::core_arch::x86::avx512f_mm256_maskz_loadu_psfunction
2538core::core_arch::x86::avx512f_mm256_mmask_i32gather_epi32function
2539core::core_arch::x86::avx512f_mm256_mmask_i32gather_epi64function
2540core::core_arch::x86::avx512f_mm256_mmask_i32gather_pdfunction
2541core::core_arch::x86::avx512f_mm256_mmask_i32gather_psfunction
2542core::core_arch::x86::avx512f_mm256_mmask_i64gather_epi32function
2543core::core_arch::x86::avx512f_mm256_mmask_i64gather_epi64function
2544core::core_arch::x86::avx512f_mm256_mmask_i64gather_pdfunction
2545core::core_arch::x86::avx512f_mm256_mmask_i64gather_psfunction
2546core::core_arch::x86::avx512f_mm256_store_epi32function
2547core::core_arch::x86::avx512f_mm256_store_epi64function
2548core::core_arch::x86::avx512f_mm256_storeu_epi32function
2549core::core_arch::x86::avx512f_mm256_storeu_epi64function
2550core::core_arch::x86::avx512f_mm512_i32gather_epi32function
2551core::core_arch::x86::avx512f_mm512_i32gather_epi64function
2552core::core_arch::x86::avx512f_mm512_i32gather_pdfunction
2553core::core_arch::x86::avx512f_mm512_i32gather_psfunction
2554core::core_arch::x86::avx512f_mm512_i32logather_epi64function
2555core::core_arch::x86::avx512f_mm512_i32logather_pdfunction
2556core::core_arch::x86::avx512f_mm512_i32loscatter_epi64function
2557core::core_arch::x86::avx512f_mm512_i32loscatter_pdfunction
2558core::core_arch::x86::avx512f_mm512_i32scatter_epi32function
2559core::core_arch::x86::avx512f_mm512_i32scatter_epi64function
2560core::core_arch::x86::avx512f_mm512_i32scatter_pdfunction
2561core::core_arch::x86::avx512f_mm512_i32scatter_psfunction
2562core::core_arch::x86::avx512f_mm512_i64gather_epi32function
2563core::core_arch::x86::avx512f_mm512_i64gather_epi64function
2564core::core_arch::x86::avx512f_mm512_i64gather_pdfunction
2565core::core_arch::x86::avx512f_mm512_i64gather_psfunction
2566core::core_arch::x86::avx512f_mm512_i64scatter_epi32function
2567core::core_arch::x86::avx512f_mm512_i64scatter_epi64function
2568core::core_arch::x86::avx512f_mm512_i64scatter_pdfunction
2569core::core_arch::x86::avx512f_mm512_i64scatter_psfunction
2570core::core_arch::x86::avx512f_mm512_load_epi32function
2571core::core_arch::x86::avx512f_mm512_load_epi64function
2572core::core_arch::x86::avx512f_mm512_load_pdfunction
2573core::core_arch::x86::avx512f_mm512_load_psfunction
2574core::core_arch::x86::avx512f_mm512_load_si512function
2575core::core_arch::x86::avx512f_mm512_loadu_epi32function
2576core::core_arch::x86::avx512f_mm512_loadu_epi64function
2577core::core_arch::x86::avx512f_mm512_loadu_pdfunction
2578core::core_arch::x86::avx512f_mm512_loadu_psfunction
2579core::core_arch::x86::avx512f_mm512_loadu_si512function
2580core::core_arch::x86::avx512f_mm512_mask_compressstoreu_epi32function
2581core::core_arch::x86::avx512f_mm512_mask_compressstoreu_epi64function
2582core::core_arch::x86::avx512f_mm512_mask_compressstoreu_pdfunction
2583core::core_arch::x86::avx512f_mm512_mask_compressstoreu_psfunction
2584core::core_arch::x86::avx512f_mm512_mask_cvtepi32_storeu_epi16function
2585core::core_arch::x86::avx512f_mm512_mask_cvtepi32_storeu_epi8function
2586core::core_arch::x86::avx512f_mm512_mask_cvtepi64_storeu_epi16function
2587core::core_arch::x86::avx512f_mm512_mask_cvtepi64_storeu_epi32function
2588core::core_arch::x86::avx512f_mm512_mask_cvtepi64_storeu_epi8function
2589core::core_arch::x86::avx512f_mm512_mask_cvtsepi32_storeu_epi16function
2590core::core_arch::x86::avx512f_mm512_mask_cvtsepi32_storeu_epi8function
2591core::core_arch::x86::avx512f_mm512_mask_cvtsepi64_storeu_epi16function
2592core::core_arch::x86::avx512f_mm512_mask_cvtsepi64_storeu_epi32function
2593core::core_arch::x86::avx512f_mm512_mask_cvtsepi64_storeu_epi8function
2594core::core_arch::x86::avx512f_mm512_mask_cvtusepi32_storeu_epi16function
2595core::core_arch::x86::avx512f_mm512_mask_cvtusepi32_storeu_epi8function
2596core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi16function
2597core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi32function
2598core::core_arch::x86::avx512f_mm512_mask_cvtusepi64_storeu_epi8function
2599core::core_arch::x86::avx512f_mm512_mask_expandloadu_epi32function
2600core::core_arch::x86::avx512f_mm512_mask_expandloadu_epi64function
2601core::core_arch::x86::avx512f_mm512_mask_expandloadu_pdfunction
2602core::core_arch::x86::avx512f_mm512_mask_expandloadu_psfunction
2603core::core_arch::x86::avx512f_mm512_mask_i32gather_epi32function
2604core::core_arch::x86::avx512f_mm512_mask_i32gather_epi64function
2605core::core_arch::x86::avx512f_mm512_mask_i32gather_pdfunction
2606core::core_arch::x86::avx512f_mm512_mask_i32gather_psfunction
2607core::core_arch::x86::avx512f_mm512_mask_i32logather_epi64function
2608core::core_arch::x86::avx512f_mm512_mask_i32logather_pdfunction
2609core::core_arch::x86::avx512f_mm512_mask_i32loscatter_epi64function
2610core::core_arch::x86::avx512f_mm512_mask_i32loscatter_pdfunction
2611core::core_arch::x86::avx512f_mm512_mask_i32scatter_epi32function
2612core::core_arch::x86::avx512f_mm512_mask_i32scatter_epi64function
2613core::core_arch::x86::avx512f_mm512_mask_i32scatter_pdfunction
2614core::core_arch::x86::avx512f_mm512_mask_i32scatter_psfunction
2615core::core_arch::x86::avx512f_mm512_mask_i64gather_epi32function
2616core::core_arch::x86::avx512f_mm512_mask_i64gather_epi64function
2617core::core_arch::x86::avx512f_mm512_mask_i64gather_pdfunction
2618core::core_arch::x86::avx512f_mm512_mask_i64gather_psfunction
2619core::core_arch::x86::avx512f_mm512_mask_i64scatter_epi32function
2620core::core_arch::x86::avx512f_mm512_mask_i64scatter_epi64function
2621core::core_arch::x86::avx512f_mm512_mask_i64scatter_pdfunction
2622core::core_arch::x86::avx512f_mm512_mask_i64scatter_psfunction
2623core::core_arch::x86::avx512f_mm512_mask_load_epi32function
2624core::core_arch::x86::avx512f_mm512_mask_load_epi64function
2625core::core_arch::x86::avx512f_mm512_mask_load_pdfunction
2626core::core_arch::x86::avx512f_mm512_mask_load_psfunction
2627core::core_arch::x86::avx512f_mm512_mask_loadu_epi32function
2628core::core_arch::x86::avx512f_mm512_mask_loadu_epi64function
2629core::core_arch::x86::avx512f_mm512_mask_loadu_pdfunction
2630core::core_arch::x86::avx512f_mm512_mask_loadu_psfunction
2631core::core_arch::x86::avx512f_mm512_mask_store_epi32function
2632core::core_arch::x86::avx512f_mm512_mask_store_epi64function
2633core::core_arch::x86::avx512f_mm512_mask_store_pdfunction
2634core::core_arch::x86::avx512f_mm512_mask_store_psfunction
2635core::core_arch::x86::avx512f_mm512_mask_storeu_epi32function
2636core::core_arch::x86::avx512f_mm512_mask_storeu_epi64function
2637core::core_arch::x86::avx512f_mm512_mask_storeu_pdfunction
2638core::core_arch::x86::avx512f_mm512_mask_storeu_psfunction
2639core::core_arch::x86::avx512f_mm512_maskz_expandloadu_epi32function
2640core::core_arch::x86::avx512f_mm512_maskz_expandloadu_epi64function
2641core::core_arch::x86::avx512f_mm512_maskz_expandloadu_pdfunction
2642core::core_arch::x86::avx512f_mm512_maskz_expandloadu_psfunction
2643core::core_arch::x86::avx512f_mm512_maskz_load_epi32function
2644core::core_arch::x86::avx512f_mm512_maskz_load_epi64function
2645core::core_arch::x86::avx512f_mm512_maskz_load_pdfunction
2646core::core_arch::x86::avx512f_mm512_maskz_load_psfunction
2647core::core_arch::x86::avx512f_mm512_maskz_loadu_epi32function
2648core::core_arch::x86::avx512f_mm512_maskz_loadu_epi64function
2649core::core_arch::x86::avx512f_mm512_maskz_loadu_pdfunction
2650core::core_arch::x86::avx512f_mm512_maskz_loadu_psfunction
2651core::core_arch::x86::avx512f_mm512_store_epi32function
2652core::core_arch::x86::avx512f_mm512_store_epi64function
2653core::core_arch::x86::avx512f_mm512_store_pdfunction
2654core::core_arch::x86::avx512f_mm512_store_psfunction
2655core::core_arch::x86::avx512f_mm512_store_si512function
2656core::core_arch::x86::avx512f_mm512_storeu_epi32function
2657core::core_arch::x86::avx512f_mm512_storeu_epi64function
2658core::core_arch::x86::avx512f_mm512_storeu_pdfunction
2659core::core_arch::x86::avx512f_mm512_storeu_psfunction
2660core::core_arch::x86::avx512f_mm512_storeu_si512function
2661core::core_arch::x86::avx512f_mm512_stream_load_si512function
2662core::core_arch::x86::avx512f_mm512_stream_pdfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2663core::core_arch::x86::avx512f_mm512_stream_psfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2664core::core_arch::x86::avx512f_mm512_stream_si512functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2665core::core_arch::x86::avx512f_mm_i32scatter_epi32function
2666core::core_arch::x86::avx512f_mm_i32scatter_epi64function
2667core::core_arch::x86::avx512f_mm_i32scatter_pdfunction
2668core::core_arch::x86::avx512f_mm_i32scatter_psfunction
2669core::core_arch::x86::avx512f_mm_i64scatter_epi32function
2670core::core_arch::x86::avx512f_mm_i64scatter_epi64function
2671core::core_arch::x86::avx512f_mm_i64scatter_pdfunction
2672core::core_arch::x86::avx512f_mm_i64scatter_psfunction
2673core::core_arch::x86::avx512f_mm_load_epi32function
2674core::core_arch::x86::avx512f_mm_load_epi64function
2675core::core_arch::x86::avx512f_mm_loadu_epi32function
2676core::core_arch::x86::avx512f_mm_loadu_epi64function
2677core::core_arch::x86::avx512f_mm_mask_compressstoreu_epi32function
2678core::core_arch::x86::avx512f_mm_mask_compressstoreu_epi64function
2679core::core_arch::x86::avx512f_mm_mask_compressstoreu_pdfunction
2680core::core_arch::x86::avx512f_mm_mask_compressstoreu_psfunction
2681core::core_arch::x86::avx512f_mm_mask_cvtepi32_storeu_epi16function
2682core::core_arch::x86::avx512f_mm_mask_cvtepi32_storeu_epi8function
2683core::core_arch::x86::avx512f_mm_mask_cvtepi64_storeu_epi16function
2684core::core_arch::x86::avx512f_mm_mask_cvtepi64_storeu_epi32function
2685core::core_arch::x86::avx512f_mm_mask_cvtepi64_storeu_epi8function
2686core::core_arch::x86::avx512f_mm_mask_cvtsepi32_storeu_epi16function
2687core::core_arch::x86::avx512f_mm_mask_cvtsepi32_storeu_epi8function
2688core::core_arch::x86::avx512f_mm_mask_cvtsepi64_storeu_epi16function
2689core::core_arch::x86::avx512f_mm_mask_cvtsepi64_storeu_epi32function
2690core::core_arch::x86::avx512f_mm_mask_cvtsepi64_storeu_epi8function
2691core::core_arch::x86::avx512f_mm_mask_cvtusepi32_storeu_epi16function
2692core::core_arch::x86::avx512f_mm_mask_cvtusepi32_storeu_epi8function
2693core::core_arch::x86::avx512f_mm_mask_cvtusepi64_storeu_epi16function
2694core::core_arch::x86::avx512f_mm_mask_cvtusepi64_storeu_epi32function
2695core::core_arch::x86::avx512f_mm_mask_cvtusepi64_storeu_epi8function
2696core::core_arch::x86::avx512f_mm_mask_expandloadu_epi32function
2697core::core_arch::x86::avx512f_mm_mask_expandloadu_epi64function
2698core::core_arch::x86::avx512f_mm_mask_expandloadu_pdfunction
2699core::core_arch::x86::avx512f_mm_mask_expandloadu_psfunction
2700core::core_arch::x86::avx512f_mm_mask_i32scatter_epi32function
2701core::core_arch::x86::avx512f_mm_mask_i32scatter_epi64function
2702core::core_arch::x86::avx512f_mm_mask_i32scatter_pdfunction
2703core::core_arch::x86::avx512f_mm_mask_i32scatter_psfunction
2704core::core_arch::x86::avx512f_mm_mask_i64scatter_epi32function
2705core::core_arch::x86::avx512f_mm_mask_i64scatter_epi64function
2706core::core_arch::x86::avx512f_mm_mask_i64scatter_pdfunction
2707core::core_arch::x86::avx512f_mm_mask_i64scatter_psfunction
2708core::core_arch::x86::avx512f_mm_mask_load_epi32function
2709core::core_arch::x86::avx512f_mm_mask_load_epi64function
2710core::core_arch::x86::avx512f_mm_mask_load_pdfunction
2711core::core_arch::x86::avx512f_mm_mask_load_psfunction
2712core::core_arch::x86::avx512f_mm_mask_load_sdfunction
2713core::core_arch::x86::avx512f_mm_mask_load_ssfunction
2714core::core_arch::x86::avx512f_mm_mask_loadu_epi32function
2715core::core_arch::x86::avx512f_mm_mask_loadu_epi64function
2716core::core_arch::x86::avx512f_mm_mask_loadu_pdfunction
2717core::core_arch::x86::avx512f_mm_mask_loadu_psfunction
2718core::core_arch::x86::avx512f_mm_mask_store_epi32function
2719core::core_arch::x86::avx512f_mm_mask_store_epi64function
2720core::core_arch::x86::avx512f_mm_mask_store_pdfunction
2721core::core_arch::x86::avx512f_mm_mask_store_psfunction
2722core::core_arch::x86::avx512f_mm_mask_store_sdfunction
2723core::core_arch::x86::avx512f_mm_mask_store_ssfunction
2724core::core_arch::x86::avx512f_mm_mask_storeu_epi32function
2725core::core_arch::x86::avx512f_mm_mask_storeu_epi64function
2726core::core_arch::x86::avx512f_mm_mask_storeu_pdfunction
2727core::core_arch::x86::avx512f_mm_mask_storeu_psfunction
2728core::core_arch::x86::avx512f_mm_maskz_expandloadu_epi32function
2729core::core_arch::x86::avx512f_mm_maskz_expandloadu_epi64function
2730core::core_arch::x86::avx512f_mm_maskz_expandloadu_pdfunction
2731core::core_arch::x86::avx512f_mm_maskz_expandloadu_psfunction
2732core::core_arch::x86::avx512f_mm_maskz_load_epi32function
2733core::core_arch::x86::avx512f_mm_maskz_load_epi64function
2734core::core_arch::x86::avx512f_mm_maskz_load_pdfunction
2735core::core_arch::x86::avx512f_mm_maskz_load_psfunction
2736core::core_arch::x86::avx512f_mm_maskz_load_sdfunction
2737core::core_arch::x86::avx512f_mm_maskz_load_ssfunction
2738core::core_arch::x86::avx512f_mm_maskz_loadu_epi32function
2739core::core_arch::x86::avx512f_mm_maskz_loadu_epi64function
2740core::core_arch::x86::avx512f_mm_maskz_loadu_pdfunction
2741core::core_arch::x86::avx512f_mm_maskz_loadu_psfunction
2742core::core_arch::x86::avx512f_mm_mmask_i32gather_epi32function
2743core::core_arch::x86::avx512f_mm_mmask_i32gather_epi64function
2744core::core_arch::x86::avx512f_mm_mmask_i32gather_pdfunction
2745core::core_arch::x86::avx512f_mm_mmask_i32gather_psfunction
2746core::core_arch::x86::avx512f_mm_mmask_i64gather_epi32function
2747core::core_arch::x86::avx512f_mm_mmask_i64gather_epi64function
2748core::core_arch::x86::avx512f_mm_mmask_i64gather_pdfunction
2749core::core_arch::x86::avx512f_mm_mmask_i64gather_psfunction
2750core::core_arch::x86::avx512f_mm_store_epi32function
2751core::core_arch::x86::avx512f_mm_store_epi64function
2752core::core_arch::x86::avx512f_mm_storeu_epi32function
2753core::core_arch::x86::avx512f_mm_storeu_epi64function
2754core::core_arch::x86::avx512f_store_mask16function
2755core::core_arch::x86::avx512fp16_mm256_load_phfunction
2756core::core_arch::x86::avx512fp16_mm256_loadu_phfunction
2757core::core_arch::x86::avx512fp16_mm256_store_phfunction
2758core::core_arch::x86::avx512fp16_mm256_storeu_phfunction
2759core::core_arch::x86::avx512fp16_mm512_load_phfunction
2760core::core_arch::x86::avx512fp16_mm512_loadu_phfunction
2761core::core_arch::x86::avx512fp16_mm512_store_phfunction
2762core::core_arch::x86::avx512fp16_mm512_storeu_phfunction
2763core::core_arch::x86::avx512fp16_mm_load_phfunction
2764core::core_arch::x86::avx512fp16_mm_load_shfunction
2765core::core_arch::x86::avx512fp16_mm_loadu_phfunction
2766core::core_arch::x86::avx512fp16_mm_mask_load_shfunction
2767core::core_arch::x86::avx512fp16_mm_mask_store_shfunction
2768core::core_arch::x86::avx512fp16_mm_maskz_load_shfunction
2769core::core_arch::x86::avx512fp16_mm_store_phfunction
2770core::core_arch::x86::avx512fp16_mm_store_shfunction
2771core::core_arch::x86::avx512fp16_mm_storeu_phfunction
2772core::core_arch::x86::avx512vbmi2_mm256_mask_compressstoreu_epi16function
2773core::core_arch::x86::avx512vbmi2_mm256_mask_compressstoreu_epi8function
2774core::core_arch::x86::avx512vbmi2_mm256_mask_expandloadu_epi16function
2775core::core_arch::x86::avx512vbmi2_mm256_mask_expandloadu_epi8function
2776core::core_arch::x86::avx512vbmi2_mm256_maskz_expandloadu_epi16function
2777core::core_arch::x86::avx512vbmi2_mm256_maskz_expandloadu_epi8function
2778core::core_arch::x86::avx512vbmi2_mm512_mask_compressstoreu_epi16function
2779core::core_arch::x86::avx512vbmi2_mm512_mask_compressstoreu_epi8function
2780core::core_arch::x86::avx512vbmi2_mm512_mask_expandloadu_epi16function
2781core::core_arch::x86::avx512vbmi2_mm512_mask_expandloadu_epi8function
2782core::core_arch::x86::avx512vbmi2_mm512_maskz_expandloadu_epi16function
2783core::core_arch::x86::avx512vbmi2_mm512_maskz_expandloadu_epi8function
2784core::core_arch::x86::avx512vbmi2_mm_mask_compressstoreu_epi16function
2785core::core_arch::x86::avx512vbmi2_mm_mask_compressstoreu_epi8function
2786core::core_arch::x86::avx512vbmi2_mm_mask_expandloadu_epi16function
2787core::core_arch::x86::avx512vbmi2_mm_mask_expandloadu_epi8function
2788core::core_arch::x86::avx512vbmi2_mm_maskz_expandloadu_epi16function
2789core::core_arch::x86::avx512vbmi2_mm_maskz_expandloadu_epi8function
2790core::core_arch::x86::avxneconvert_mm256_bcstnebf16_psfunction
2791core::core_arch::x86::avxneconvert_mm256_bcstnesh_psfunction
2792core::core_arch::x86::avxneconvert_mm256_cvtneebf16_psfunction
2793core::core_arch::x86::avxneconvert_mm256_cvtneeph_psfunction
2794core::core_arch::x86::avxneconvert_mm256_cvtneobf16_psfunction
2795core::core_arch::x86::avxneconvert_mm256_cvtneoph_psfunction
2796core::core_arch::x86::avxneconvert_mm_bcstnebf16_psfunction
2797core::core_arch::x86::avxneconvert_mm_bcstnesh_psfunction
2798core::core_arch::x86::avxneconvert_mm_cvtneebf16_psfunction
2799core::core_arch::x86::avxneconvert_mm_cvtneeph_psfunction
2800core::core_arch::x86::avxneconvert_mm_cvtneobf16_psfunction
2801core::core_arch::x86::avxneconvert_mm_cvtneoph_psfunction
2802core::core_arch::x86::bt_bittestfunction
2803core::core_arch::x86::bt_bittestandcomplementfunction
2804core::core_arch::x86::bt_bittestandresetfunction
2805core::core_arch::x86::bt_bittestandsetfunction
2806core::core_arch::x86::fxsr_fxrstorfunction
2807core::core_arch::x86::fxsr_fxsavefunction
2808core::core_arch::x86::kl_mm_aesdec128kl_u8function
2809core::core_arch::x86::kl_mm_aesdec256kl_u8function
2810core::core_arch::x86::kl_mm_aesdecwide128kl_u8function
2811core::core_arch::x86::kl_mm_aesdecwide256kl_u8function
2812core::core_arch::x86::kl_mm_aesenc128kl_u8function
2813core::core_arch::x86::kl_mm_aesenc256kl_u8function
2814core::core_arch::x86::kl_mm_aesencwide128kl_u8function
2815core::core_arch::x86::kl_mm_aesencwide256kl_u8function
2816core::core_arch::x86::kl_mm_encodekey128_u32function
2817core::core_arch::x86::kl_mm_encodekey256_u32function
2818core::core_arch::x86::kl_mm_loadiwkeyfunction
2819core::core_arch::x86::rdtsc__rdtscpfunction
2820core::core_arch::x86::rdtsc_rdtscfunction
2821core::core_arch::x86::rtm_xabortfunction
2822core::core_arch::x86::rtm_xbeginfunction
2823core::core_arch::x86::rtm_xendfunction
2824core::core_arch::x86::rtm_xtestfunction
2825core::core_arch::x86::sse_MM_GET_EXCEPTION_MASKfunction
2826core::core_arch::x86::sse_MM_GET_EXCEPTION_STATEfunction
2827core::core_arch::x86::sse_MM_GET_FLUSH_ZERO_MODEfunction
2828core::core_arch::x86::sse_MM_GET_ROUNDING_MODEfunction
2829core::core_arch::x86::sse_MM_SET_EXCEPTION_MASKfunction
2830core::core_arch::x86::sse_MM_SET_EXCEPTION_STATEfunction
2831core::core_arch::x86::sse_MM_SET_FLUSH_ZERO_MODEfunction
2832core::core_arch::x86::sse_MM_SET_ROUNDING_MODEfunction
2833core::core_arch::x86::sse_mm_getcsrfunction
2834core::core_arch::x86::sse_mm_load1_psfunction
2835core::core_arch::x86::sse_mm_load_psfunction
2836core::core_arch::x86::sse_mm_load_ps1function
2837core::core_arch::x86::sse_mm_load_ssfunction
2838core::core_arch::x86::sse_mm_loadr_psfunction
2839core::core_arch::x86::sse_mm_loadu_psfunction
2840core::core_arch::x86::sse_mm_setcsrfunction
2841core::core_arch::x86::sse_mm_store1_psfunction
2842core::core_arch::x86::sse_mm_store_psfunction
2843core::core_arch::x86::sse_mm_store_ps1function
2844core::core_arch::x86::sse_mm_store_ssfunction
2845core::core_arch::x86::sse_mm_storer_psfunction
2846core::core_arch::x86::sse_mm_storeu_psfunction
2847core::core_arch::x86::sse_mm_stream_psfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2848core::core_arch::x86::sse2_mm_clflushfunction
2849core::core_arch::x86::sse2_mm_load1_pdfunction
2850core::core_arch::x86::sse2_mm_load_pdfunction
2851core::core_arch::x86::sse2_mm_load_pd1function
2852core::core_arch::x86::sse2_mm_load_sdfunction
2853core::core_arch::x86::sse2_mm_load_si128function
2854core::core_arch::x86::sse2_mm_loadh_pdfunction
2855core::core_arch::x86::sse2_mm_loadl_epi64function
2856core::core_arch::x86::sse2_mm_loadl_pdfunction
2857core::core_arch::x86::sse2_mm_loadr_pdfunction
2858core::core_arch::x86::sse2_mm_loadu_pdfunction
2859core::core_arch::x86::sse2_mm_loadu_si128function
2860core::core_arch::x86::sse2_mm_loadu_si16function
2861core::core_arch::x86::sse2_mm_loadu_si32function
2862core::core_arch::x86::sse2_mm_loadu_si64function
2863core::core_arch::x86::sse2_mm_maskmoveu_si128functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2864core::core_arch::x86::sse2_mm_store1_pdfunction
2865core::core_arch::x86::sse2_mm_store_pdfunction
2866core::core_arch::x86::sse2_mm_store_pd1function
2867core::core_arch::x86::sse2_mm_store_sdfunction
2868core::core_arch::x86::sse2_mm_store_si128function
2869core::core_arch::x86::sse2_mm_storeh_pdfunction
2870core::core_arch::x86::sse2_mm_storel_epi64function
2871core::core_arch::x86::sse2_mm_storel_pdfunction
2872core::core_arch::x86::sse2_mm_storer_pdfunction
2873core::core_arch::x86::sse2_mm_storeu_pdfunction
2874core::core_arch::x86::sse2_mm_storeu_si128function
2875core::core_arch::x86::sse2_mm_storeu_si16function
2876core::core_arch::x86::sse2_mm_storeu_si32function
2877core::core_arch::x86::sse2_mm_storeu_si64function
2878core::core_arch::x86::sse2_mm_stream_pdfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2879core::core_arch::x86::sse2_mm_stream_si128functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2880core::core_arch::x86::sse2_mm_stream_si32functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
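The `_mm_sfence` requirement repeated across the stream-store rows above can be sketched as follows. This is a minimal illustration, not part of any listed API: `stream_store` is a hypothetical helper, and the x86_64-only path is gated with `cfg` so the fallback is a plain store elsewhere.

```rust
// Stores `v` to `*dst` with a non-temporal hint where available,
// falling back to a plain store on other architectures.
fn stream_store(dst: &mut i32, v: i32) {
    #[cfg(target_arch = "x86_64")]
    unsafe {
        use core::arch::x86_64::{_mm_sfence, _mm_stream_si32};
        _mm_stream_si32(dst, v); // non-temporal store, bypasses the cache
        _mm_sfence();            // required before any other access to `*dst`
    }
    #[cfg(not(target_arch = "x86_64"))]
    {
        *dst = v;
    }
}

fn main() {
    let mut slot = 0;
    stream_store(&mut slot, 42);
    // Safe to read only because the fence already ran inside the helper.
    assert_eq!(slot, 42);
}
```

Note how the fence is issued by the same thread that performed the streaming store, before the function returns, exactly as the safety note prescribes.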
2881core::core_arch::x86::sse3_mm_lddqu_si128function
2882core::core_arch::x86::sse3_mm_loaddup_pdfunction
2883core::core_arch::x86::sse41_mm_stream_load_si128function
2884core::core_arch::x86::sse4a_mm_stream_sdfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2885core::core_arch::x86::sse4a_mm_stream_ssfunctionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2886core::core_arch::x86::xsave_xgetbvfunction
2887core::core_arch::x86::xsave_xrstorfunction
2888core::core_arch::x86::xsave_xrstorsfunction
2889core::core_arch::x86::xsave_xsavefunction
2890core::core_arch::x86::xsave_xsavecfunction
2891core::core_arch::x86::xsave_xsaveoptfunction
2892core::core_arch::x86::xsave_xsavesfunction
2893core::core_arch::x86::xsave_xsetbvfunction
2894core::core_arch::x86_64::amx_tile_cmmimfp16psfunction
2895core::core_arch::x86_64::amx_tile_cmmrlfp16psfunction
2896core::core_arch::x86_64::amx_tile_cvtrowd2psfunction
2897core::core_arch::x86_64::amx_tile_cvtrowps2phhfunction
2898core::core_arch::x86_64::amx_tile_cvtrowps2phlfunction
2899core::core_arch::x86_64::amx_tile_dpbf16psfunction
2900core::core_arch::x86_64::amx_tile_dpbf8psfunction
2901core::core_arch::x86_64::amx_tile_dpbhf8psfunction
2902core::core_arch::x86_64::amx_tile_dpbssdfunction
2903core::core_arch::x86_64::amx_tile_dpbsudfunction
2904core::core_arch::x86_64::amx_tile_dpbusdfunction
2905core::core_arch::x86_64::amx_tile_dpbuudfunction
2906core::core_arch::x86_64::amx_tile_dpfp16psfunction
2907core::core_arch::x86_64::amx_tile_dphbf8psfunction
2908core::core_arch::x86_64::amx_tile_dphf8psfunction
2909core::core_arch::x86_64::amx_tile_loadconfigfunction
2910core::core_arch::x86_64::amx_tile_loaddfunction
2911core::core_arch::x86_64::amx_tile_loaddrsfunction
2912core::core_arch::x86_64::amx_tile_mmultf32psfunction
2913core::core_arch::x86_64::amx_tile_movrowfunction
2914core::core_arch::x86_64::amx_tile_releasefunction
2915core::core_arch::x86_64::amx_tile_storeconfigfunction
2916core::core_arch::x86_64::amx_tile_storedfunction
2917core::core_arch::x86_64::amx_tile_stream_loaddfunction
2918core::core_arch::x86_64::amx_tile_stream_loaddrsfunction
2919core::core_arch::x86_64::amx_tile_zerofunction
2920core::core_arch::x86_64::bt_bittest64function
2921core::core_arch::x86_64::bt_bittestandcomplement64function
2922core::core_arch::x86_64::bt_bittestandreset64function
2923core::core_arch::x86_64::bt_bittestandset64function
2924core::core_arch::x86_64::cmpxchg16bcmpxchg16bfunction
2925core::core_arch::x86_64::fxsr_fxrstor64function
2926core::core_arch::x86_64::fxsr_fxsave64function
2927core::core_arch::x86_64::sse2_mm_stream_si64functionAfter using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details.
2928core::core_arch::x86_64::xsave_xrstor64function
2929core::core_arch::x86_64::xsave_xrstors64function
2930core::core_arch::x86_64::xsave_xsave64function
2931core::core_arch::x86_64::xsave_xsavec64function
2932core::core_arch::x86_64::xsave_xsaveopt64function
2933core::core_arch::x86_64::xsave_xsaves64function
2934core::core_simd::cast::sealedSealedtraitImplementing this trait asserts that the type is a valid vector element for the `simd_cast` or `simd_as` intrinsics.
2935core::core_simd::masksMaskElementtraitType must be a signed integer.
2936core::core_simd::masks::Maskfrom_simd_uncheckedfunctionAll elements must be either 0 or -1.
2937core::core_simd::masks::Maskset_uncheckedfunction`index` must be less than `self.len()`.
2938core::core_simd::masks::Masktest_uncheckedfunction`index` must be less than `self.len()`.
2939core::core_simd::vectorSimdElementtraitThis trait, when implemented, asserts the compiler can monomorphize `#[repr(simd)]` structs with the marked type as an element. Strictly, it is valid to impl if the vector will not be miscompiled. Practically, it is user-unfriendly to impl it if the vector won't compile, even when no soundness guarantees are broken by allowing the user to try.
2940core::core_simd::vector::Simdgather_ptrfunctionEach read must satisfy the same conditions as [`core::ptr::read`].
2941core::core_simd::vector::Simdgather_select_ptrfunctionEnabled elements must satisfy the same conditions as [`core::ptr::read`].
2942core::core_simd::vector::Simdgather_select_uncheckedfunctionCalling this function with an `enable`d out-of-bounds index is *[undefined behavior]* even if the resulting value is not used.
2943core::core_simd::vector::Simdload_select_ptrfunctionEnabled `ptr` elements must be safe to read as if by `core::ptr::read`.
2944core::core_simd::vector::Simdload_select_uncheckedfunctionEnabled loads must not exceed the length of `slice`.
2945core::core_simd::vector::Simdscatter_ptrfunctionEach write must satisfy the same conditions as [`core::ptr::write`].
2946core::core_simd::vector::Simdscatter_select_ptrfunctionEnabled pointers must satisfy the same conditions as [`core::ptr::write`].
2947core::core_simd::vector::Simdscatter_select_uncheckedfunctionCalling this function with an enabled out-of-bounds index is *[undefined behavior]*, and may lead to memory corruption.
2948core::core_simd::vector::Simdstore_select_ptrfunctionMemory addresses for each element are calculated using [`pointer::wrapping_offset`], and each enabled element must satisfy the same conditions as [`core::ptr::write`].
2949core::core_simd::vector::Simdstore_select_uncheckedfunctionEvery enabled element must be in bounds for the `slice`.
2950core::f128to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
2951core::f16to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
2952core::f32to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
2953core::f64to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
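The `to_int_unchecked` contract shared by the four float rows above can be demonstrated with values that are known to satisfy it:

```rust
fn main() {
    let x = 3.7_f64;
    // SAFETY: `x` is not NaN, not infinite, and its truncated value
    // fits in `i32`, so all three listed conditions hold.
    let n: i32 = unsafe { x.to_int_unchecked() };
    assert_eq!(n, 3); // truncates toward zero

    // SAFETY: same reasoning for a negative input.
    let m: i32 = unsafe { (-2.9_f64).to_int_unchecked() };
    assert_eq!(m, -2);
}
```

For inputs that might violate the conditions, the safe `as` cast (which saturates) is the usual alternative.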
2954core::ffi::c_str::CStrfrom_bytes_with_nul_uncheckedfunctionThe provided slice **must** be nul-terminated and not contain any interior nul bytes.
2955core::ffi::c_str::CStrfrom_ptrfunction* The memory pointed to by `ptr` must contain a valid nul terminator at the end of the string. * `ptr` must be [valid] for reads of bytes up to and including the nul terminator. This means in particular: * The entire memory range of this `CStr` must be contained within a single allocation! * `ptr` must be non-null even for a zero-length cstr. * The memory referenced by the returned `CStr` must not be mutated for the duration of lifetime `'a`. * The nul terminator must be within `isize::MAX` from `ptr` > **Note**: This operation is intended to be a 0-cost cast but it is > currently implemented with an up-front calculation of the length of > the string. This is not guaranteed to always be the case.
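A sketch of upholding both `CStr` contracts above, using a byte literal whose layout is known at compile time:

```rust
use std::ffi::CStr;

fn main() {
    // SAFETY: the slice is nul-terminated and contains no interior
    // nul bytes, as `from_bytes_with_nul_unchecked` requires.
    let bytes = b"hello\0";
    let s = unsafe { CStr::from_bytes_with_nul_unchecked(bytes) };
    assert_eq!(s.to_str().unwrap(), "hello");

    // SAFETY: the pointer comes from a live, nul-terminated `CStr` in
    // a single allocation, it is non-null, the terminator is within
    // `isize::MAX` bytes, and the memory is not mutated while the
    // borrow lives.
    let same = unsafe { CStr::from_ptr(s.as_ptr()) };
    assert_eq!(same, s);
}
```

In FFI code the pointer would instead come from C, and every condition in the row above must be guaranteed by the C side's documentation.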
2956core::ffi::va_listVaArgSafetraitThe standard library implements this trait for primitive types that are expected to have a variable argument application-binary interface (ABI) on all platforms. When C passes variable arguments, integers smaller than [`c_int`] and floats smaller than [`c_double`] are implicitly promoted to [`c_int`] and [`c_double`] respectively. Implementing this trait for types that are subject to this promotion rule is invalid. [`c_int`]: core::ffi::c_int [`c_double`]: core::ffi::c_double
2957core::ffi::va_list::VaListargfunctionThis function is only sound to call when there is another argument to read, and that argument is a properly initialized value of the type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments, is unsound.
2958core::fieldFieldtraitGiven a valid value of type `Self::Base`, there exists a valid value of type `Self::Type` at byte offset `OFFSET`
2959core::future::async_dropasync_drop_in_placefunction
2960core::hintassert_uncheckedfunction`cond` must be `true`. It is immediate UB to call this with `false`.
2961core::hintunreachable_uncheckedfunctionReaching this function is *Undefined Behavior*. As the compiler assumes that all forms of Undefined Behavior can never happen, it will eliminate all branches in the surrounding code that it can determine will invariably lead to a call to `unreachable_unchecked()`. If the assumptions embedded in using this function turn out to be wrong - that is, if the site which is calling `unreachable_unchecked()` is actually reachable at runtime - the compiler may have generated nonsensical machine instructions for this situation, including in seemingly unrelated code, causing difficult-to-debug problems. Use this function sparingly. Consider using the [`unreachable!`] macro, which may prevent some optimizations but will safely panic in case it is actually reached at runtime. Benchmark your code to find out if using `unreachable_unchecked()` comes with a performance benefit.
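The two hint rows above pair naturally: `assert_unchecked` feeds the optimizer a caller-provided fact, and `unreachable_unchecked` marks a branch that analysis shows can never run. A small sketch (`div_pow2` is a hypothetical helper for illustration):

```rust
use std::hint;

// Divides by a power of two; the caller's guarantee is encoded with
// `assert_unchecked` so the compiler may drop range checks.
fn div_pow2(x: u32, shift: u32) -> u32 {
    // SAFETY: callers promise `shift < 32`; passing a false condition
    // here is immediate undefined behavior.
    unsafe { hint::assert_unchecked(shift < 32) };
    x >> shift
}

fn main() {
    assert_eq!(div_pow2(64, 3), 8);

    match 64u32 % 2 {
        0 | 1 => {}
        // SAFETY: `% 2` can only yield 0 or 1, so this arm is
        // statically unreachable; reaching it would be UB.
        _ => unsafe { hint::unreachable_unchecked() },
    }
}
```

As the row notes, prefer the safe `unreachable!` macro unless a benchmark shows the unchecked form actually helps.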
2962core::i128unchecked_addfunctionThis results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add
2963core::i128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
2964core::i128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul
2965core::i128unchecked_negfunctionThis results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg
2966core::i128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl
2967core::i128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i128::shl_exact`] would return `None`.
2968core::i128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr
2969core::i128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS` i.e. when [`i128::shr_exact`] would return `None`.
2970core::i128unchecked_subfunctionThis results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub
2971core::i16unchecked_addfunctionThis results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add
2972core::i16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
2973core::i16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul
2974core::i16unchecked_negfunctionThis results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg
2975core::i16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl
2976core::i16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i16::shl_exact`] would return `None`.
2977core::i16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr
2978core::i16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS` i.e. when [`i16::shr_exact`] would return `None`.
2979core::i16unchecked_subfunctionThis results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub
2980core::i32unchecked_addfunctionThis results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add
2981core::i32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
2982core::i32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul
2983core::i32unchecked_negfunctionThis results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg
2984core::i32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl
2985core::i32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i32::shl_exact`] would return `None`.
2986core::i32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr
2987core::i32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS` i.e. when [`i32::shr_exact`] would return `None`.
2988core::i32unchecked_subfunctionThis results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub
2989core::i64unchecked_addfunctionThis results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add
2990core::i64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
2991core::i64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul
2992core::i64unchecked_negfunctionThis results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg
2993core::i64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl
2994core::i64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i64::shl_exact`] would return `None`.
2995core::i64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr
2996core::i64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS` i.e. when [`i64::shr_exact`] would return `None`.
2997core::i64unchecked_subfunctionThis results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub
2998core::i8unchecked_addfunctionThis results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add
2999core::i8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3000core::i8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul
3001core::i8unchecked_negfunctionThis results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg
3002core::i8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl
3003core::i8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i8::shl_exact`] would return `None`.
3004core::i8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr
3005core::i8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS` i.e. when [`i8::shr_exact`] would return `None`.
3006core::i8unchecked_subfunctionThis results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub
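The unchecked arithmetic contract repeated for each integer width above reduces to one rule: the call is sound exactly when the corresponding `checked_*` method would return `Some`. A sketch using `i32`:

```rust
fn main() {
    let a: i32 = 100;
    let b: i32 = 27;
    // SAFETY: 100 + 27 cannot overflow `i32`, so `checked_add` would
    // return `Some`; the unchecked form is therefore sound here.
    let sum = unsafe { a.unchecked_add(b) };
    assert_eq!(sum, 127);

    // The checked/wrapping forms remain the safe alternatives when
    // overflow is possible:
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);
}
```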
3007core::intrinsicsalign_of_valfunctionSee [`crate::mem::align_of_val_raw`] for safety conditions.
3008core::intrinsicsarith_offsetfunctionUnlike the `offset` intrinsic, this intrinsic does not restrict the resulting pointer to point into or at the end of an allocated object, and it wraps with two's complement arithmetic. The resulting value is not necessarily valid to be used to actually access memory. The stabilized version of this intrinsic is [`pointer::wrapping_offset`].
3009core::intrinsicsassumefunction
3010core::intrinsicsatomic_andfunction
3011core::intrinsicsatomic_cxchgfunction
3012core::intrinsicsatomic_cxchgweakfunction
3013core::intrinsicsatomic_fencefunction
3014core::intrinsicsatomic_loadfunction
3015core::intrinsicsatomic_maxfunction
3016core::intrinsicsatomic_minfunction
3017core::intrinsicsatomic_nandfunction
3018core::intrinsicsatomic_orfunction
3019core::intrinsicsatomic_singlethreadfencefunction
3020core::intrinsicsatomic_storefunction
3021core::intrinsicsatomic_umaxfunction
3022core::intrinsicsatomic_uminfunction
3023core::intrinsicsatomic_xaddfunction
3024core::intrinsicsatomic_xchgfunction
3025core::intrinsicsatomic_xorfunction
3026core::intrinsicsatomic_xsubfunction
3027core::intrinsicscatch_unwindfunction
3028core::intrinsicscompare_bytesfunction`left` and `right` must each be [valid] for reads of `bytes` bytes. Note that this applies to the whole range, not just until the first byte that differs. That allows optimizations that can read in large chunks. [valid]: crate::ptr#safety
3029core::intrinsicsconst_allocatefunctionThe `align` argument must be a power of two. This constraint is enforced at compile time (a compile error occurs if it is violated) but is not checked at runtime.
3030core::intrinsicsconst_deallocatefunctionThe `align` argument must be a power of two. This constraint is enforced at compile time (a compile error occurs if it is violated) but is not checked at runtime. If `ptr` was created in another const, or points to a local variable, this intrinsic does not deallocate it.
3031core::intrinsicsconst_make_globalfunction
3032core::intrinsicscopyfunction
3033core::intrinsicscopy_nonoverlappingfunction
3034core::intrinsicsctlz_nonzerofunction
3035core::intrinsicscttz_nonzerofunction
3036core::intrinsicsdisjoint_bitorfunctionRequires that `(a & b) == 0`, or equivalently that `(a | b) == (a + b)`. Otherwise it's immediate UB.
3037core::intrinsicsexact_divfunction
3038core::intrinsicsfadd_fastfunction
3039core::intrinsicsfdiv_fastfunction
3040core::intrinsicsfloat_to_int_uncheckedfunction
3041core::intrinsicsfmul_fastfunction
3042core::intrinsicsfrem_fastfunction
3043core::intrinsicsfsub_fastfunction
3044core::intrinsicsnontemporal_storefunction
3045core::intrinsicsoffsetfunctionIf the computed offset is non-zero, then both the starting and resulting pointer must be either in bounds or at the end of an allocation. If either pointer is out of bounds or arithmetic overflow occurs then this operation is undefined behavior. The stabilized version of this intrinsic is [`pointer::offset`].
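The in-bounds rule above is easiest to see through the stabilized `pointer::offset` on a small array: every intermediate pointer must stay within, or one past the end of, the same allocation.

```rust
fn main() {
    let arr = [10i32, 20, 30, 40];
    let p = arr.as_ptr();

    // SAFETY: offset 2 is in bounds of `arr`'s allocation.
    let third = unsafe { *p.offset(2) };
    assert_eq!(third, 30);

    // SAFETY: offset 4 is exactly one past the end, which is a valid
    // pointer to *form* (e.g. as a loop bound), but not to read.
    let _end = unsafe { p.offset(4) };
    // `p.offset(5)` would already be undefined behavior, even unread.
}
```

`wrapping_offset` (the stabilized form of `arith_offset` above) is the escape hatch when the intermediate pointer may leave the allocation.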
3046core::intrinsicsptr_offset_fromfunction
3047core::intrinsicsptr_offset_from_unsignedfunction
3048core::intrinsicsraw_eqfunctionIt's UB to call this if any of the *bytes* in `*a` or `*b` are uninitialized. Note that this is a stricter criterion than just the *values* being fully-initialized: if `T` has padding, it's UB to call this intrinsic. At compile-time, it is furthermore UB to call this if any of the bytes in `*a` or `*b` have provenance. (The implementation is allowed to branch on the results of comparisons, which is UB if any of their inputs are `undef`.)
3049core::intrinsicsread_via_copyfunction
3050core::intrinsicssize_of_valfunctionSee [`crate::mem::size_of_val_raw`] for safety conditions.
3051core::intrinsicsslice_get_uncheckedfunction- `index < PtrMetadata(slice_ptr)`, so the indexing is in-bounds for the slice - the resulting offsetting is in-bounds of the allocation, which is always the case for references, but needs to be upheld manually for pointers
3052core::intrinsicstransmutefunction
3053core::intrinsicstransmute_uncheckedfunction
3054core::intrinsicstyped_swap_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` must *not* overlap with the region of memory beginning at `y`. * The memory pointed to by `x` and `y` must both contain values of type `T`. [valid]: crate::ptr#safety
3055core::intrinsicsunaligned_volatile_loadfunction
3056core::intrinsicsunaligned_volatile_storefunction
3057core::intrinsicsunchecked_addfunction
3058core::intrinsicsunchecked_divfunction
3059core::intrinsicsunchecked_funnel_shlfunction
3060core::intrinsicsunchecked_funnel_shrfunction
3061core::intrinsicsunchecked_mulfunction
3062core::intrinsicsunchecked_remfunction
3063core::intrinsicsunchecked_shlfunction
3064core::intrinsicsunchecked_shrfunction
3065core::intrinsicsunchecked_subfunction
3066core::intrinsicsunreachablefunction
3067core::intrinsicsva_argfunctionThis function is only sound to call when: - there is a next variable argument available. - the next argument's type must be ABI-compatible with the type `T`. - the next argument must have a properly initialized value of type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments, is unsound.
3068core::intrinsicsva_endfunction`ap` must not be used to access variable arguments after this call.
3069core::intrinsicsvolatile_copy_memoryfunction
3070core::intrinsicsvolatile_copy_nonoverlapping_memoryfunctionThe safety requirements are consistent with [`copy_nonoverlapping`] while the read and write behaviors are volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`copy_nonoverlapping`]: ptr::copy_nonoverlapping
3071core::intrinsicsvolatile_loadfunction
3072core::intrinsicsvolatile_set_memoryfunctionThe safety requirements are consistent with [`write_bytes`] while the write behavior is volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`write_bytes`]: ptr::write_bytes
3073core::intrinsicsvolatile_storefunction
3074core::intrinsicsvtable_alignfunction`ptr` must point to a vtable.
3075core::intrinsicsvtable_sizefunction`ptr` must point to a vtable.
3076core::intrinsicswrite_bytesfunction
3077core::intrinsicswrite_via_movefunction
3078core::intrinsics::boundsBuiltinDereftraitMust actually *be* such a type.
3079core::intrinsics::simdsimd_addfunction
3080core::intrinsics::simdsimd_andfunction
3081core::intrinsics::simdsimd_arith_offsetfunction
3082core::intrinsics::simdsimd_asfunction
3083core::intrinsics::simdsimd_bitmaskfunction`x` must contain only `0` and `!0`.
3084core::intrinsics::simdsimd_bitreversefunction
3085core::intrinsics::simdsimd_bswapfunction
3086core::intrinsics::simdsimd_carryless_mulfunction
3087core::intrinsics::simdsimd_castfunctionCasting from integer types is always safe. Casting between two float types is also always safe. Casting floats to integers truncates, following the same rules as `to_int_unchecked`. Specifically, each element must: * Not be `NaN` * Not be infinite * Be representable in the return type, after truncating off its fractional part
3088core::intrinsics::simdsimd_cast_ptrfunction
3089core::intrinsics::simdsimd_ceilfunction
3090core::intrinsics::simdsimd_ctlzfunction
3091core::intrinsics::simdsimd_ctpopfunction
3092core::intrinsics::simdsimd_cttzfunction
3093core::intrinsics::simdsimd_divfunctionFor integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN / -1` is undefined behavior.
3094core::intrinsics::simdsimd_eqfunction
3095core::intrinsics::simdsimd_expose_provenancefunction
3096core::intrinsics::simdsimd_extractfunction`idx` must be const and in-bounds of the vector.
3097core::intrinsics::simdsimd_extract_dynfunction`idx` must be in-bounds of the vector.
3098core::intrinsics::simdsimd_fabsfunction
3099core::intrinsics::simdsimd_fcosfunction
3100core::intrinsics::simdsimd_fexpfunction
3101core::intrinsics::simdsimd_fexp2function
3102core::intrinsics::simdsimd_flogfunction
3103core::intrinsics::simdsimd_flog10function
3104core::intrinsics::simdsimd_flog2function
3105core::intrinsics::simdsimd_floorfunction
3106core::intrinsics::simdsimd_fmafunction
3107core::intrinsics::simdsimd_fmaxfunction
3108core::intrinsics::simdsimd_fminfunction
3109core::intrinsics::simdsimd_fsinfunction
3110core::intrinsics::simdsimd_fsqrtfunction
3111core::intrinsics::simdsimd_funnel_shlfunctionEach element of `shift` must be less than `<int>::BITS`.
3112core::intrinsics::simdsimd_funnel_shrfunctionEach element of `shift` must be less than `<int>::BITS`.
3113core::intrinsics::simdsimd_gatherfunctionUnmasked values in `T` must be readable as if by `<ptr>::read` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values.
3114core::intrinsics::simdsimd_gefunction
3115core::intrinsics::simdsimd_gtfunction
3116core::intrinsics::simdsimd_insertfunction`idx` must be in-bounds of the vector.
3117core::intrinsics::simdsimd_insert_dynfunction`idx` must be in-bounds of the vector.
3118core::intrinsics::simdsimd_lefunction
3119core::intrinsics::simdsimd_ltfunction
3120core::intrinsics::simdsimd_masked_loadfunction`ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values.
3121core::intrinsics::simdsimd_masked_storefunction`ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values.
3122core::intrinsics::simdsimd_mulfunction
3123core::intrinsics::simdsimd_nefunction
3124core::intrinsics::simdsimd_negfunction
3125core::intrinsics::simdsimd_orfunction
3126core::intrinsics::simdsimd_reduce_add_orderedfunction
3127core::intrinsics::simdsimd_reduce_add_unorderedfunction
3128core::intrinsics::simdsimd_reduce_allfunction`x` must contain only `0` or `!0`.
3129core::intrinsics::simdsimd_reduce_andfunction
3130core::intrinsics::simdsimd_reduce_anyfunction`x` must contain only `0` or `!0`.
3131core::intrinsics::simdsimd_reduce_maxfunction
3132core::intrinsics::simdsimd_reduce_minfunction
3133core::intrinsics::simdsimd_reduce_mul_orderedfunction
3134core::intrinsics::simdsimd_reduce_mul_unorderedfunction
3135core::intrinsics::simdsimd_reduce_orfunction
3136core::intrinsics::simdsimd_reduce_xorfunction
3137core::intrinsics::simdsimd_relaxed_fmafunction
3138core::intrinsics::simdsimd_remfunctionFor integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN % -1` is undefined behavior.
3139core::intrinsics::simdsimd_roundfunction
3140core::intrinsics::simdsimd_round_ties_evenfunction
3141core::intrinsics::simdsimd_saturating_addfunction
3142core::intrinsics::simdsimd_saturating_subfunction
3143core::intrinsics::simdsimd_scatterfunctionUnmasked values in `T` must be writeable as if by `<ptr>::write` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values.
3144core::intrinsics::simdsimd_selectfunction`mask` must only contain `0` and `!0`.
3145core::intrinsics::simdsimd_select_bitmaskfunction
3146core::intrinsics::simdsimd_shlfunctionEach element of `rhs` must be less than `<int>::BITS`.
3147core::intrinsics::simdsimd_shrfunctionEach element of `rhs` must be less than `<int>::BITS`.
3148core::intrinsics::simdsimd_shufflefunction
3149core::intrinsics::simdsimd_splatfunction
3150core::intrinsics::simdsimd_subfunction
3151core::intrinsics::simdsimd_truncfunction
3152core::intrinsics::simdsimd_with_exposed_provenancefunction
3153core::intrinsics::simdsimd_xorfunction
3154core::io::borrowed_buf::BorrowedBufset_initfunctionThe caller must ensure that the first `n` unfilled bytes of the buffer have already been initialized.
3155core::io::borrowed_buf::BorrowedCursoradvance_uncheckedfunctionThe caller must ensure that the first `n` bytes of the cursor have been properly initialised.
3156core::io::borrowed_buf::BorrowedCursoras_mutfunctionThe caller must not uninitialize any bytes in the initialized portion of the cursor.
3157core::io::borrowed_buf::BorrowedCursorset_initfunctionThe caller must ensure that the first `n` bytes of the buffer have already been initialized.
3158core::isizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add
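The `unchecked_add` contract can be discharged by checking first: `checked_add` returning `Some` is exactly the condition under which the unchecked call is sound. A minimal sketch (the `add_fast` helper is illustrative, not part of the API):

```rust
// Sketch: uphold `unchecked_add`'s precondition via `checked_add`.
fn add_fast(a: isize, b: isize) -> isize {
    match a.checked_add(b) {
        // SAFETY: `checked_add` returned `Some`, so `a + b` neither
        // exceeds `isize::MAX` nor falls below `isize::MIN`.
        Some(_) => unsafe { a.unchecked_add(b) },
        None => panic!("overflow"),
    }
}

fn main() {
    assert_eq!(add_fast(40, 2), 42);
    assert_eq!(add_fast(-1, 1), 0);
}
```

In real code the `Some(v) => v` arm would simply return `v`; the point here is only to show where the safety comment's justification comes from.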
3159core::isizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3160core::isizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul
3161core::isizeunchecked_negfunctionThis results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg
3162core::isizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl
3163core::isizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`isize::shl_exact`] would return `None`.
3164core::isizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr
3165core::isizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS`, i.e. when [`isize::shr_exact`] would return `None`.
3166core::isizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub
3167core::iter::traits::markerTrustedLentraitThis trait must only be implemented when the contract is upheld. Consumers of this trait must inspect [`Iterator::size_hint()`]’s upper bound.
3168core::iter::traits::markerTrustedSteptraitThe implementation of [`Step`] for the given type must guarantee all invariants of all methods are upheld. See the [`Step`] trait's documentation for details. Consumers are free to rely on the invariants in unsafe code.
3169core::markerFreezetraitThis trait is a core part of the language, it is just expressed as a trait in libcore for convenience. Do *not* implement it for other types.
3170core::markerSendtrait
3171core::markerSynctrait
3172core::markerUnsafeUnpintrait
3173core::memalign_of_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`align_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
3174core::memconjure_zstfunction- `T` must be *[inhabited]*, i.e. possible to construct. This means that types like zero-variant enums and [`!`] are unsound to conjure. - You must use the value only in ways which do not violate any *safety* invariants of the type. While it's easy to create a *valid* instance of an inhabited ZST, since having no bits in its representation means there's only one possible value, that doesn't mean that it's always *sound* to do so. For example, a library could design zero-sized tokens that are `!Default + !Clone`, limiting their creation to functions that initialize some state or establish a scope. Conjuring such a token could break invariants and lead to unsoundness.
3175core::memsize_of_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`size_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [`size_of::<T>()`]: size_of [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
3176core::memtransmute_copyfunction
3177core::memuninitializedfunction
3178core::memzeroedfunction
3179core::mem::manually_drop::ManuallyDropdropfunctionThis function runs the destructor of the contained value. Other than changes made by the destructor itself, the memory is left unchanged, and so as far as the compiler is concerned still holds a bit-pattern which is valid for the type `T`. However, this "zombie" value should not be exposed to safe code, and this function should not be called more than once. Using a value after it has been dropped, or dropping a value multiple times, can cause Undefined Behavior (depending on what `drop` does). This is normally prevented by the type system, but users of `ManuallyDrop` must uphold those guarantees without assistance from the compiler. [pinned]: crate::pin
3180core::mem::manually_drop::ManuallyDroptakefunctionThis function semantically moves out the contained value without preventing further usage, leaving the state of this container unchanged. It is your responsibility to ensure that this `ManuallyDrop` is not used again.
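The single-use contract of `ManuallyDrop::take` can be sketched as follows (the `into_inner_once` helper is illustrative): the value is moved out exactly once, and the slot is never touched again, so it is neither dropped twice nor duplicated.

```rust
use std::mem::ManuallyDrop;

// Sketch: move the contained value out exactly once.
fn into_inner_once(slot: &mut ManuallyDrop<String>) -> String {
    // SAFETY: the caller never uses `slot` after this call, so the
    // value cannot be dropped twice or observed as a duplicate.
    unsafe { ManuallyDrop::take(slot) }
}

fn main() {
    let mut slot = ManuallyDrop::new(String::from("hello"));
    let s = into_inner_once(&mut slot);
    // `slot` is not used again; `s` now owns the string and drops normally.
    assert_eq!(s, "hello");
}
```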
3181core::mem::maybe_uninit::MaybeUninitarray_assume_initfunctionIt is up to the caller to guarantee that all elements of the array are in an initialized state.
3182core::mem::maybe_uninit::MaybeUninitassume_initfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. [inv]: #initialization-invariant On top of that, remember that most types have additional invariants beyond merely being considered initialized at the type level. For example, a `1`-initialized [`Vec<T>`] is considered initialized (under the current implementation; this does not constitute a stable guarantee) because the only requirement the compiler knows about it is that the data pointer must be non-null. Creating such a `Vec<T>` does not cause *immediate* undefined behavior, but will cause undefined behavior with most safe operations (including dropping it). [`Vec<T>`]: ../../std/vec/struct.Vec.html
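The initialization invariant above can be illustrated with a small sketch: `MaybeUninit::write` fully initializes the value, and that is what justifies the subsequent `assume_init` (the `init_u32` helper is illustrative).

```rust
use std::mem::MaybeUninit;

// Sketch: initialize, then assume_init.
fn init_u32() -> u32 {
    let mut x = MaybeUninit::<u32>::uninit();
    x.write(7); // fully initializes every byte of the `u32`
    // SAFETY: the `write` above put the value into an initialized state.
    unsafe { x.assume_init() }
}

fn main() {
    assert_eq!(init_u32(), 7);
}
```

Calling `assume_init` without the preceding `write` would be immediate undefined behavior, as the row above states.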
3183core::mem::maybe_uninit::MaybeUninitassume_init_dropfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about it is that the data pointer must be non-null. Dropping such a `Vec<T>` however will cause undefined behavior. [`assume_init`]: MaybeUninit::assume_init
3184core::mem::maybe_uninit::MaybeUninitassume_init_mutfunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit`.
3185core::mem::maybe_uninit::MaybeUninitassume_init_readfunctionIt is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. Moreover, similar to the [`ptr::read`] function, this function creates a bitwise copy of the contents, regardless of whether the contained type implements the [`Copy`] trait or not. When using multiple copies of the data (by calling `assume_init_read` multiple times, or first calling `assume_init_read` and then [`assume_init`]), it is your responsibility to ensure that data may indeed be duplicated. [inv]: #initialization-invariant [`assume_init`]: MaybeUninit::assume_init
3186core::mem::maybe_uninit::MaybeUninitassume_init_reffunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state.
3187core::mem::transmutabilityTransmuteFromtraitIf `Dst: TransmuteFrom<Src, ASSUMPTIONS>`, the compiler guarantees that `Src` is soundly *union-transmutable* into a value of type `Dst`, provided that the programmer has guaranteed that the given [`ASSUMPTIONS`](Assume) are satisfied. A union-transmute is any bit-reinterpretation conversion in the form of: ```rust pub unsafe fn transmute_via_union<Src, Dst>(src: Src) -> Dst { use core::mem::ManuallyDrop; #[repr(C)] union Transmute<Src, Dst> { src: ManuallyDrop<Src>, dst: ManuallyDrop<Dst>, } let transmute = Transmute { src: ManuallyDrop::new(src), }; let dst = unsafe { transmute.dst }; ManuallyDrop::into_inner(dst) } ``` Note that this construction is more permissive than [`mem::transmute_copy`](super::transmute_copy); union-transmutes permit conversions that extend the bits of `Src` with trailing padding to fill trailing uninitialized bytes of `Self`; e.g.: ```rust #![feature(transmutability)] use core::mem::{Assume, TransmuteFrom}; let src = 42u8; // size = 1 #[repr(C, align(2))] struct Dst(u8); // size = 2 let _ = unsafe { <Dst as TransmuteFrom<u8, { Assume::SAFETY }>>::transmute(src) }; ```
3188core::num::nonzeroZeroablePrimitivetraitTypes implementing this trait must be primitives that are valid when zeroed. The associated `Self::NonZeroInner` type must have the same size+align as `Self`, but with a niche and bit validity making it so the following `transmutes` are sound: - `Self::NonZeroInner` to `Option<Self::NonZeroInner>` - `Option<Self::NonZeroInner>` to `Self` (And, consequently, `Self::NonZeroInner` to `Self`.)
3189core::num::nonzero::NonZerofrom_mut_uncheckedfunctionThe referenced value must not be zero.
3190core::num::nonzero::NonZeronew_uncheckedfunctionThe value must not be zero.
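As with the other `unchecked` constructors, the sound pattern is to prove the precondition in a branch; a sketch (the `to_nonzero` helper is illustrative, and in practice `NonZero::new` already does this):

```rust
use std::num::NonZero;

// Sketch: rule out zero before the unchecked constructor.
fn to_nonzero(n: u32) -> Option<NonZero<u32>> {
    if n == 0 {
        None
    } else {
        // SAFETY: the branch above guarantees `n != 0`.
        Some(unsafe { NonZero::new_unchecked(n) })
    }
}

fn main() {
    assert_eq!(to_nonzero(5).map(|n| n.get()), Some(5));
    assert_eq!(to_nonzero(0), None);
}
```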
3191core::num::nonzero::NonZerounchecked_addfunction
3192core::num::nonzero::NonZerounchecked_mulfunction
3193core::ops::derefDerefPuretrait
3194core::option::Optionunwrap_uncheckedfunctionCalling this method on [`None`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
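The contract is that `None` must be impossible at the call site. A sketch where the caller supplies that guarantee (the `first_even` helper and its precondition are illustrative):

```rust
// Sketch: `unwrap_unchecked` is sound only because the caller
// guarantees the slice contains at least one even element.
fn first_even(xs: &[u32]) -> u32 {
    debug_assert!(xs.iter().any(|x| x % 2 == 0));
    // SAFETY: by the caller's guarantee, `find` cannot return `None`.
    unsafe { xs.iter().copied().find(|x| x % 2 == 0).unwrap_unchecked() }
}

fn main() {
    assert_eq!(first_even(&[1, 3, 4, 7]), 4);
}
```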
3195core::pinPinCoerceUnsizedtraitIf this type implements `Deref`, then the concrete type returned by `deref` and `deref_mut` must not change without a modification. The following operations are not considered modifications: * Moving the pointer. * Performing unsizing coercions on the pointer. * Performing dynamic dispatch with the pointer. * Calling `deref` or `deref_mut` on the pointer. The concrete type of a trait object is the type that the vtable corresponds to. The concrete type of a slice is an array of the same element type and the length specified in the metadata. The concrete type of a sized type is the type itself.
3196core::pin::Pinget_unchecked_mutfunctionThis function is unsafe. You must guarantee that you will never move the data out of the mutable reference you receive when you call this function, so that the invariants on the `Pin` type can be upheld. If the underlying data is `Unpin`, `Pin::get_mut` should be used instead.
3197core::pin::Pininto_inner_uncheckedfunctionThis function is unsafe. You must guarantee that you will continue to treat the pointer `Ptr` as pinned after you call this function, so that the invariants on the `Pin` type can be upheld. If the code using the resulting `Ptr` does not continue to maintain the pinning invariants that is a violation of the API contract and may lead to undefined behavior in later (safe) operations. Note that you must be able to guarantee that the data pointed to by `Ptr` will be treated as pinned all the way until its `drop` handler is complete! *For more information, see the [`pin` module docs][self]* If the underlying data is [`Unpin`], [`Pin::into_inner`] should be used instead.
3198core::pin::Pinmap_uncheckedfunctionThis function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning
3199core::pin::Pinmap_unchecked_mutfunctionThis function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning
3200core::pin::Pinnew_uncheckedfunctionThis constructor is unsafe because we cannot guarantee that the data pointed to by `pointer` is pinned. At its core, pinning a value means making the guarantee that the value's data will not be moved nor have its storage invalidated until it gets dropped. For a more thorough explanation of pinning, see the [`pin` module docs]. If the caller that is constructing this `Pin<Ptr>` does not ensure that the data `Ptr` points to is pinned, that is a violation of the API contract and may lead to undefined behavior in later (even safe) operations. By using this method, you are also making a promise about the [`Deref`], [`DerefMut`], and [`Drop`] implementations of `Ptr`, if they exist. Most importantly, they must not move out of their `self` arguments: `Pin::as_mut` and `Pin::as_ref` will call `DerefMut::deref_mut` and `Deref::deref` *on the pointer type `Ptr`* and expect these methods to uphold the pinning invariants. Moreover, by calling this method you promise that the reference `Ptr` dereferences to will not be moved out of again; in particular, it must not be possible to obtain a `&mut Ptr::Target` and then move out of that reference (using, for example [`mem::swap`]). For example, calling `Pin::new_unchecked` on an `&'a mut T` is unsafe because while you are able to pin it for the given lifetime `'a`, you have no control over whether it is kept pinned once `'a` ends, and therefore cannot uphold the guarantee that a value, once pinned, remains pinned until it is dropped: ``` use std::mem; use std::pin::Pin; fn move_pinned_ref<T>(mut a: T, mut b: T) { unsafe { let p: Pin<&mut T> = Pin::new_unchecked(&mut a); // This should mean the pointee `a` can never move again. } mem::swap(&mut a, &mut b); // Potential UB down the road ⚠️ // The address of `a` changed to `b`'s stack slot, so `a` got moved even // though we have previously pinned it! We have violated the pinning API contract. } ``` A value, once pinned, must remain pinned until it is dropped (unless its type implements `Unpin`). Because `Pin<&mut T>` does not own the value, dropping the `Pin` will not drop the value and will not end the pinning contract. So moving the value after dropping the `Pin<&mut T>` is still a violation of the API contract. Similarly, calling `Pin::new_unchecked` on an `Rc<T>` is unsafe because there could be aliases to the same data that are not subject to the pinning restrictions: ``` use std::rc::Rc; use std::pin::Pin; fn move_pinned_rc<T>(mut x: Rc<T>) { // This should mean the pointee can never move again. let pin = unsafe { Pin::new_unchecked(Rc::clone(&x)) }; { let p: Pin<&T> = pin.as_ref(); // ... } drop(pin); let content = Rc::get_mut(&mut x).unwrap(); // Potential UB down the road ⚠️ // Now, if `x` was the only reference, we have a mutable reference to // data that we pinned above, which we could use to move it as we have // seen in the previous example. We have violated the pinning API contract. } ```
3201core::pointeraddfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_add`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_add`]: #method.wrapping_add [allocation]: crate::ptr#allocation
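A minimal sketch of in-bounds raw indexing under this contract (the `raw_index` helper is illustrative); the assertion makes the `add` precondition explicit:

```rust
// Sketch: raw indexing within a slice's allocation.
fn raw_index(xs: &[i32], i: usize) -> i32 {
    assert!(i < xs.len());
    // SAFETY: `i < xs.len()`, so the offset stays inside the slice's
    // allocation, and `i * size_of::<i32>()` cannot exceed `isize::MAX`
    // because allocations are never larger than that.
    unsafe { *xs.as_ptr().add(i) }
}

fn main() {
    let v = [10, 20, 30];
    assert_eq!(raw_index(&v, 1), 20);
}
```

Note that `xs.as_ptr().add(xs.len())` would also be allowed (one-past-the-end), but dereferencing it would not be.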
3202core::pointeras_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3203core::pointeras_mut_uncheckedfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3204core::pointeras_reffunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
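The "null or convertible to a reference" contract can be sketched as follows (the `deref_or_default` helper is illustrative):

```rust
// Sketch: `<*const T>::as_ref` maps null to `None` and a valid
// pointer to `Some(&T)`.
fn deref_or_default(p: *const i32) -> i32 {
    // SAFETY: `p` is either null or derived from a live, aligned `&i32`;
    // both call sites in `main` satisfy this.
    unsafe { p.as_ref() }.copied().unwrap_or(0)
}

fn main() {
    let x = 42;
    assert_eq!(deref_or_default(&x), 42);
    assert_eq!(deref_or_default(std::ptr::null()), 0);
}
```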
3205core::pointeras_ref_uncheckedfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3206core::pointeras_uninit_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3207core::pointeras_uninit_reffunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
3208core::pointeras_uninit_slicefunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation
3209core::pointeras_uninit_slice_mutfunctionWhen calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation
3210core::pointerbyte_addfunction
3211core::pointerbyte_offsetfunction
3212core::pointerbyte_offset_fromfunction
3213core::pointerbyte_offset_from_unsignedfunction
3214core::pointerbyte_subfunction
3215core::pointercopy_fromfunction
3216core::pointercopy_from_nonoverlappingfunction
3217core::pointercopy_tofunction
3218core::pointercopy_to_nonoverlappingfunction
3219core::pointerdrop_in_placefunction
3220core::pointerget_uncheckedfunction
3221core::pointerget_unchecked_mutfunction
3222core::pointeroffsetfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Note that "range" here refers to a half-open range as usual in Rust, i.e., `self..result` for non-negative offsets and `result..self` for negative offsets. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_offset`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_offset`]: #method.wrapping_offset [allocation]: crate::ptr#allocation
3223core::pointeroffset_fromfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be [derived from][crate::ptr#provenance] a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that allocation. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation
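A sketch of `offset_from`'s same-allocation and exact-multiple requirements (the `tail_offset` helper is illustrative): both pointers come from one slice, and their byte distance is a whole number of elements.

```rust
// Sketch: element distance between two pointers into the same allocation.
fn tail_offset(xs: &[u16]) -> isize {
    let base = xs.as_ptr();
    // SAFETY: `base` and the one-past-the-end pointer both derive from
    // the same slice allocation, and their byte distance is an exact
    // multiple of `size_of::<u16>()`.
    unsafe { xs.as_ptr_range().end.offset_from(base) }
}

fn main() {
    assert_eq!(tail_offset(&[1, 2, 3, 4]), 4);
    assert_eq!(tail_offset(&[]), 0);
}
```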
3224core::pointeroffset_from_unsignedfunction- The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`.
3225core::pointerreadfunction
3226core::pointerread_unalignedfunction
3227core::pointerread_volatilefunction
3228core::pointerreplacefunction
3229core::pointersplit_at_mutfunction`mid` must be [in-bounds] of the underlying [allocation]. Which means `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. Since `len` being in-bounds is not a safety invariant of `*mut [T]` the safety requirements of this method are the same as for [`split_at_mut_unchecked`]. The explicit bounds check is only as useful as `len` is correct. [`split_at_mut_unchecked`]: #method.split_at_mut_unchecked [in-bounds]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3230core::pointersplit_at_mut_uncheckedfunction`mid` must be [in-bounds] of the underlying [allocation]. Which means `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. [in-bounds]: #method.add [out-of-bounds index]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3231core::pointersubfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_sub`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_sub`]: #method.wrapping_sub [allocation]: crate::ptr#allocation
3232core::pointerswapfunction
3233core::pointerwritefunction
3234core::pointerwrite_bytesfunction
3235core::pointerwrite_unalignedfunction
3236core::pointerwrite_volatilefunction
3237core::ptrcopyfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0, and `dst` must remain valid even when `src` is read for `count * size_of::<T>()` bytes. (This means if the memory ranges overlap, the `dst` pointer must not be invalidated by `src` reads.) * Both `src` and `dst` must be properly aligned. Like [`read`], `copy` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety
3238core::ptrcopy_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0. * Both `src` and `dst` must be properly aligned. * The region of memory beginning at `src` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `dst` with the same size. Like [`read`], `copy_nonoverlapping` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using *both* the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety
3239core::ptrdrop_in_placefunctionBehavior is undefined if any of the following conditions are violated: * `to_drop` must be [valid] for both reads and writes. * `to_drop` must be properly aligned, even if `T` has size 0. * `to_drop` must be nonnull, even if `T` has size 0. * The value `to_drop` points to must be valid for dropping, which may mean it must uphold additional invariants. These invariants depend on the type of the value being dropped. For instance, when dropping a Box, the box's pointer to the heap must be valid. * While `drop_in_place` is executing, the only way to access parts of `to_drop` is through the `&mut self` references supplied to the `Drop::drop` methods that `drop_in_place` invokes. Additionally, if `T` is not [`Copy`], using the pointed-to value after calling `drop_in_place` can cause undefined behavior. Note that `*to_drop = foo` counts as a use because it will cause the value to be dropped again. [`write()`] can be used to overwrite data without causing it to be dropped. [valid]: self#safety
3240core::ptrreadfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads or `T` must be a ZST. * `src` must be properly aligned. Use [`read_unaligned`] if this is not the case. * `src` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned.
3241core::ptrread_unalignedfunctionBehavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads. * `src` must point to a properly initialized value of type `T`. Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. [read-ownership]: read#ownership-of-the-returned-value [valid]: self#safety
3242core::ptrread_volatilefunctionLike [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. However, storing non-[`Copy`] types in volatile memory is almost certainly incorrect. Behavior is undefined if any of the following conditions are violated: * `src` must be either [valid] for reads, or `T` must be a ZST, or `src` must point to memory outside of all Rust allocations and reading from that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `src` must be properly aligned. * Reading from `src` must produce a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety [read-ownership]: read#ownership-of-the-returned-value
3243core::ptrreplacefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for both reads and writes or `T` must be a ZST. * `dst` must be properly aligned. * `dst` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
3244core::ptrswapfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the other pointer is written. (This means if the memory ranges overlap, the two pointers must not be subject to aliasing restrictions relative to each other.) * Both `x` and `y` must be properly aligned. Note that even if `T` has size `0`, the pointers must be properly aligned. [valid]: self#safety
3245core::ptrswap_nonoverlappingfunctionBehavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes of `count * size_of::<T>()` bytes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `y` with the same size. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [valid]: self#safety
3246core::ptrwritefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes or `T` must be a ZST. * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the case. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
3247core::ptrwrite_bytesfunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes. * `dst` must be properly aligned. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointer must be properly aligned. Additionally, note that changing `*dst` in this way can easily lead to undefined behavior (UB) later if the written bytes are not a valid representation of some `T`. For instance, the following is an **incorrect** use of this function:

```rust,no_run
unsafe {
    let mut value: u8 = 0;
    let ptr: *mut bool = &mut value as *mut u8 as *mut bool;
    let _bool = ptr.read(); // This is fine, `ptr` points to a valid `bool`.
    ptr.write_bytes(42u8, 1); // This function itself does not cause UB...
    let _bool = ptr.read(); // ...but it makes this operation UB! ⚠️
}
```

[valid]: crate::ptr#safety
3248core::ptrwrite_unalignedfunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes. [valid]: self#safety
3249core::ptrwrite_volatilefunctionBehavior is undefined if any of the following conditions are violated: * `dst` must be either [valid] for writes, or `T` must be a ZST, or `dst` must point to memory outside of all Rust allocations and writing to that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `dst` must be properly aligned. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety
3250core::ptr::alignment::Alignmentnew_uncheckedfunction`align` must be a power of two. Equivalently, it must be `1 << exp` for some `exp` in `0..usize::BITS`. It must *not* be zero.
3251core::ptr::alignment::Alignmentof_val_rawfunctionThis function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Alignment::of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html
3252core::ptr::non_null::NonNulladdfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
3253core::ptr::non_null::NonNullas_mutfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3254core::ptr::non_null::NonNullas_reffunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion).
3255core::ptr::non_null::NonNullas_uninit_mutfunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
3256core::ptr::non_null::NonNullas_uninit_reffunctionWhen calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory.
3257core::ptr::non_null::NonNullas_uninit_slicefunctionWhen calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`]. [valid]: crate::ptr#safety
3258core::ptr::non_null::NonNullas_uninit_slice_mutfunctionWhen calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`]. [valid]: crate::ptr#safety
3259core::ptr::non_null::NonNullbyte_addfunction
3260core::ptr::non_null::NonNullbyte_offsetfunction
3261core::ptr::non_null::NonNullbyte_offset_fromfunction
3262core::ptr::non_null::NonNullbyte_offset_from_unsignedfunction
3263core::ptr::non_null::NonNullbyte_subfunction
3264core::ptr::non_null::NonNullcopy_fromfunction
3265core::ptr::non_null::NonNullcopy_from_nonoverlappingfunction
3266core::ptr::non_null::NonNullcopy_tofunction
3267core::ptr::non_null::NonNullcopy_to_nonoverlappingfunction
3268core::ptr::non_null::NonNulldrop_in_placefunction
3269core::ptr::non_null::NonNullget_unchecked_mutfunction
3270core::ptr::non_null::NonNullnew_uncheckedfunction`ptr` must be non-null.
3271core::ptr::non_null::NonNulloffsetfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
3272core::ptr::non_null::NonNulloffset_fromfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be *derived from* a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that object. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation
3273core::ptr::non_null::NonNulloffset_from_unsignedfunction- The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`.
3274core::ptr::non_null::NonNullreadfunction
3275core::ptr::non_null::NonNullread_unalignedfunction
3276core::ptr::non_null::NonNullread_volatilefunction
3277core::ptr::non_null::NonNullreplacefunction
3278core::ptr::non_null::NonNullsubfunctionIf any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation
3279core::ptr::non_null::NonNullswapfunction
3280core::ptr::non_null::NonNullwritefunction
3281core::ptr::non_null::NonNullwrite_bytesfunction
3282core::ptr::non_null::NonNullwrite_unalignedfunction
3283core::ptr::non_null::NonNullwrite_volatilefunction
3284core::result::Resultunwrap_err_uncheckedfunctionCalling this method on an [`Ok`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3285core::result::Resultunwrap_uncheckedfunctionCalling this method on an [`Err`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3286core::sliceGetDisjointMutIndextraitIf `is_in_bounds()` returns `true` and `is_overlapping()` returns `false`, it must be safe to index the slice with the indices.
3287core::slicealign_tofunctionThis method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
3288core::slicealign_to_mutfunctionThis method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here.
3289core::sliceas_ascii_uncheckedfunctionEvery byte in the slice must be in `0..=127`, or else this is UB.
3290core::sliceas_chunks_uncheckedfunctionThis may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`.
3291core::sliceas_chunks_unchecked_mutfunctionThis may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`.
3292core::sliceassume_init_dropfunctionIt is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about is that the data pointer must be non-null. Dropping such a `Vec<T>`, however, will cause undefined behavior.
3293core::sliceassume_init_mutfunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit` slice.
3294core::sliceassume_init_reffunctionCalling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state.
3295core::sliceget_disjoint_unchecked_mutfunctionCalling this method with overlapping or out-of-bounds indices is *[undefined behavior]* even if the resulting references are not used.
3296core::sliceget_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get(index).unwrap_unchecked()`. It's UB to call `.get_unchecked(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked(..len + 1)`, `.get_unchecked(..=len)`, or similar. [`get`]: slice::get [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3297core::sliceget_unchecked_mutfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get_mut(index).unwrap_unchecked()`. It's UB to call `.get_unchecked_mut(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked_mut(..len + 1)`, `.get_unchecked_mut(..=len)`, or similar. [`get_mut`]: slice::get_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3298core::slicesplit_at_mut_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at_mut`]: slice::split_at_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3299core::slicesplit_at_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at`]: slice::split_at [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3300core::sliceswap_uncheckedfunctionCalling this method with an out-of-bounds index is *[undefined behavior]*. The caller has to ensure that `a < self.len()` and `b < self.len()`.
3301core::slice::indexSliceIndextrait
3302core::slice::rawfrom_mut_ptr_rangefunctionBehavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_mut_ptr_range`] fulfills these requirements.
3303core::slice::rawfrom_ptr_rangefunctionBehavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_ptr_range`] fulfills these requirements.
3304core::slice::rawfrom_raw_partsfunctionBehavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for reads for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. See [below](#incorrect-usage) for an example incorrectly not taking this into account. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`].
3305core::slice::rawfrom_raw_parts_mutfunctionBehavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for both reads and writes for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. [valid]: ptr#safety [`NonNull::dangling()`]: ptr::NonNull::dangling
3306core::stras_ascii_uncheckedfunctionEvery character in this string must be ASCII, or else this is UB.
3307core::stras_bytes_mutfunctionThe caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior.
3308core::strfrom_utf8_uncheckedfunctionThe bytes passed in must be valid UTF-8.
3309core::strfrom_utf8_unchecked_mutfunction
3310core::strget_uncheckedfunctionCallers of this function must ensure that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
3311core::strget_unchecked_mutfunctionCallers of this function must ensure that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
3312core::strslice_mut_uncheckedfunctionCallers of this function must ensure that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
3313core::strslice_uncheckedfunctionCallers of this function must ensure that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
3314core::str::convertsfrom_raw_partsfunction
3315core::str::convertsfrom_raw_parts_mutfunction
3316core::str::convertsfrom_utf8_uncheckedfunctionThe bytes passed in must be valid UTF-8.
3317core::str::convertsfrom_utf8_unchecked_mutfunction
3318core::str::patternReverseSearchertrait
3319core::str::patternSearchertrait
3320core::str::validationsnext_code_pointfunction`bytes` must produce a valid UTF-8-like (UTF-8 or WTF-8) string
3321core::sync::atomicAtomicPrimitivetrait
3322core::sync::atomic::Atomicfrom_ptrfunction* `ptr` must be aligned to `align_of::<Atomic<T>>()` (for `AtomicU8`, `AtomicI8`, and `AtomicBool` this is always 1, so the requirement is trivially met; for wider types it can be bigger than `align_of::<T>()` on some platforms). * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. * You must adhere to the [Memory model for atomic accesses]. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization. [valid]: crate::ptr#safety [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses
3323core::task::wake::LocalWakerfrom_rawfunction
3324core::task::wake::LocalWakernewfunctionThe behavior of the returned `LocalWaker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld.
3325core::task::wake::Wakerfrom_rawfunctionThe behavior of the returned `Waker` is undefined if the contract defined in [`RawWaker`]'s and [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html
3326core::task::wake::WakernewfunctionThe behavior of the returned `Waker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html
3327core::u128unchecked_addfunctionThis results in undefined behavior when `self + rhs > u128::MAX` or `self + rhs < u128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u128::checked_add [`wrapping_add`]: u128::wrapping_add
3328core::u128unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3329core::u128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3330core::u128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u128::MAX` or `self * rhs < u128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u128::checked_mul [`wrapping_mul`]: u128::wrapping_mul
3331core::u128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl
3332core::u128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS` i.e. when [`u128::shl_exact`] would return `None`.
3333core::u128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr
3334core::u128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS` i.e. when [`u128::shr_exact`] would return `None`.
3335core::u128unchecked_subfunctionThis results in undefined behavior when `self - rhs > u128::MAX` or `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u128::checked_sub [`wrapping_sub`]: u128::wrapping_sub
3336core::u16unchecked_addfunctionThis results in undefined behavior when `self + rhs > u16::MAX` or `self + rhs < u16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u16::checked_add [`wrapping_add`]: u16::wrapping_add
3337core::u16unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3338core::u16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3339core::u16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u16::MAX` or `self * rhs < u16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u16::checked_mul [`wrapping_mul`]: u16::wrapping_mul
3340core::u16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl
3341core::u16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS` i.e. when [`u16::shl_exact`] would return `None`.
3342core::u16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr
3343core::u16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS` i.e. when [`u16::shr_exact`] would return `None`.
3344core::u16unchecked_subfunctionThis results in undefined behavior when `self - rhs > u16::MAX` or `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u16::checked_sub [`wrapping_sub`]: u16::wrapping_sub
3345core::u32unchecked_addfunctionThis results in undefined behavior when `self + rhs > u32::MAX` or `self + rhs < u32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u32::checked_add [`wrapping_add`]: u32::wrapping_add
3346core::u32unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3347core::u32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3348core::u32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u32::MAX` or `self * rhs < u32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u32::checked_mul [`wrapping_mul`]: u32::wrapping_mul
3349core::u32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl
3350core::u32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS` i.e. when [`u32::shl_exact`] would return `None`.
3351core::u32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr
3352core::u32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS` i.e. when [`u32::shr_exact`] would return `None`.
3353core::u32unchecked_subfunctionThis results in undefined behavior when `self - rhs > u32::MAX` or `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u32::checked_sub [`wrapping_sub`]: u32::wrapping_sub
3354core::u64unchecked_addfunctionThis results in undefined behavior when `self + rhs > u64::MAX` or `self + rhs < u64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u64::checked_add [`wrapping_add`]: u64::wrapping_add
3355core::u64unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3356core::u64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3357core::u64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u64::MAX` or `self * rhs < u64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u64::checked_mul [`wrapping_mul`]: u64::wrapping_mul
3358core::u64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl
3359core::u64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS` i.e. when [`u64::shl_exact`] would return `None`.
3360core::u64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr
3361core::u64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS` i.e. when [`u64::shr_exact`] would return `None`.
3362core::u64unchecked_subfunctionThis results in undefined behavior when `self - rhs > u64::MAX` or `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u64::checked_sub [`wrapping_sub`]: u64::wrapping_sub
3363core::u8as_ascii_uncheckedfunctionThis byte must be valid ASCII, or else this is UB.
3364core::u8unchecked_addfunctionThis results in undefined behavior when `self + rhs > u8::MAX` or `self + rhs < u8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u8::checked_add [`wrapping_add`]: u8::wrapping_add
3365core::u8unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3366core::u8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3367core::u8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u8::MAX` or `self * rhs < u8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u8::checked_mul [`wrapping_mul`]: u8::wrapping_mul
3368core::u8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl
3369core::u8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS` i.e. when [`u8::shl_exact`] would return `None`.
3370core::u8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr
3371core::u8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS` i.e. when [`u8::shr_exact`] would return `None`.
3372core::u8unchecked_subfunctionThis results in undefined behavior when `self - rhs > u8::MAX` or `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u8::checked_sub [`wrapping_sub`]: u8::wrapping_sub
3373core::usizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > usize::MAX` or `self + rhs < usize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: usize::checked_add [`wrapping_add`]: usize::wrapping_add
3374core::usizeunchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3375core::usizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3376core::usizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > usize::MAX` or `self * rhs < usize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: usize::checked_mul [`wrapping_mul`]: usize::wrapping_mul
3377core::usizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl
3378core::usizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS` i.e. when [`usize::shl_exact`] would return `None`.
3379core::usizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr
3380core::usizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS` i.e. when [`usize::shr_exact`] would return `None`.
3381core::usizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > usize::MAX` or `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: usize::checked_sub [`wrapping_sub`]: usize::wrapping_sub
3382std::charas_ascii_uncheckedfunctionThis char must be within the ASCII range, or else this is UB.
3383std::charfrom_u32_uncheckedfunctionThis function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32
3384std::collections::hash::map::HashMapget_disjoint_unchecked_mutfunctionCalling this method with overlapping keys is *[undefined behavior]* even if the resulting references are not used. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html
3385std::envremove_varfunctionThis function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To prevent a child process from inheriting an environment variable, you can instead use [`Command::env_remove`] or [`Command::env_clear`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env_remove`]: crate::process::Command::env_remove [`Command::env_clear`]: crate::process::Command::env_clear
3386std::envset_varfunctionThis function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla (for POSIX)](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To pass an environment variable to a child process, you can instead use [`Command::env`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env`]: crate::process::Command::env
3387std::f128to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
3388std::f16to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
3389std::f32to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
3390std::f64to_int_uncheckedfunctionThe value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part
3391std::ffi::os_str::OsStrfrom_encoded_bytes_uncheckedfunctionAs the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsStr` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring.
3392std::ffi::os_str::OsStringfrom_encoded_bytes_uncheckedfunctionAs the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsString` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring.
3393std::i128unchecked_addfunctionThis results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add
3394std::i128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3395std::i128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul
3396std::i128unchecked_negfunctionThis results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg
3397std::i128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl
3398std::i128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i128::shl_exact`] would return `None`.
3399std::i128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr
3400std::i128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS` i.e. when [`i128::shr_exact`] would return `None`.
3401std::i128unchecked_subfunctionThis results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub
3402std::i16unchecked_addfunctionThis results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add
3403std::i16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3404std::i16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul
3405std::i16unchecked_negfunctionThis results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg
3406std::i16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl
3407std::i16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i16::shl_exact`] would return `None`.
3408std::i16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr
3409std::i16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS` i.e. when [`i16::shr_exact`] would return `None`.
3410std::i16unchecked_subfunctionThis results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub
3411std::i32unchecked_addfunctionThis results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add
3412std::i32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3413std::i32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul
3414std::i32unchecked_negfunctionThis results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg
3415std::i32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl
3416std::i32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i32::shl_exact`] would return `None`.
3417std::i32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr
3418std::i32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS` i.e. when [`i32::shr_exact`] would return `None`.
3419std::i32unchecked_subfunctionThis results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub
3420std::i64unchecked_addfunctionThis results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add
3421std::i64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3422std::i64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul
3423std::i64unchecked_negfunctionThis results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg
3424std::i64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl
3425std::i64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i64::shl_exact`] would return `None`.
3426std::i64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr
3427std::i64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS`, i.e. when [`i64::shr_exact`] would return `None`.
3428std::i64unchecked_subfunctionThis results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub
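The `unchecked_add`/`unchecked_sub` rows above all share one precondition: the call is sound only when the corresponding `checked_*` method would return `Some`. A minimal sketch of that guard pattern (the helper name `sum_offsets` is hypothetical):

```rust
// Hypothetical helper: fast-path addition whose callers promise no overflow.
fn sum_offsets(base: i64, delta: i64) -> i64 {
    // Cheap insurance in debug builds; release builds rely on the caller.
    debug_assert!(base.checked_add(delta).is_some());
    // SAFETY: `base + delta` is within i64::MIN..=i64::MAX, so
    // `checked_add` would return `Some` (the documented precondition).
    unsafe { base.unchecked_add(delta) }
}

fn main() {
    assert_eq!(sum_offsets(40, 2), 42);
    // `i64::MAX + 1` would violate the precondition; `checked_add` reports it.
    assert_eq!(i64::MAX.checked_add(1), None);
}
```

The same pattern applies to `unchecked_sub`, `unchecked_mul`, and `unchecked_neg` on every integer width.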
3429std::i8unchecked_addfunctionThis results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add
3430std::i8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3431std::i8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul
3432std::i8unchecked_negfunctionThis results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg
3433std::i8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl
3434std::i8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i8::shl_exact`] would return `None`.
3435std::i8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr
3436std::i8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS`, i.e. when [`i8::shr_exact`] would return `None`.
3437std::i8unchecked_subfunctionThis results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub
3438std::isizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add
3439std::isizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3440std::isizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul
3441std::isizeunchecked_negfunctionThis results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg
3442std::isizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl
3443std::isizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`isize::shl_exact`] would return `None`.
3444std::isizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr
3445std::isizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS`, i.e. when [`isize::shr_exact`] would return `None`.
3446std::isizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub
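The `unchecked_shl`/`unchecked_shr` rows require `rhs` to be strictly less than the type's bit width, i.e. that `checked_shl`/`checked_shr` would return `Some`. A sketch of enforcing that by reducing the amount modulo the width, written with the stable `checked_shl` plus the [`unwrap_unchecked`] combination these rows link to (the helper name `shl_masked` is hypothetical):

```rust
// Hypothetical helper: shift left after reducing the amount modulo the bit
// width, so the documented precondition (`rhs < i64::BITS`) always holds.
fn shl_masked(x: i64, rhs: u32) -> i64 {
    let rhs = rhs % i64::BITS; // i64::BITS == 64, so now rhs <= 63
    // SAFETY: `rhs < i64::BITS`, so `checked_shl` returns `Some`.
    unsafe { x.checked_shl(rhs).unwrap_unchecked() }
}

fn main() {
    assert_eq!(shl_masked(1, 3), 8);
    assert_eq!(shl_masked(1, 67), 8); // 67 % 64 == 3
}
```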
3447std::os::fd::owned::BorrowedFdborrow_rawfunctionThe resource pointed to by `fd` must remain open for the duration of the returned `BorrowedFd`.
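A Unix-only sketch of the `borrow_raw` contract above: the owning `File` is kept alive for as long as the `BorrowedFd` is in use (this example assumes a Unix target where `/dev/null` exists):

```rust
use std::fs::File;
use std::os::fd::{AsRawFd, BorrowedFd};

fn main() -> std::io::Result<()> {
    let file = File::open("/dev/null")?; // assumed present on Unix
    let raw = file.as_raw_fd();
    // SAFETY: `file` owns the descriptor and outlives `borrowed`, so the
    // resource stays open for the duration of the BorrowedFd.
    let borrowed: BorrowedFd<'_> = unsafe { BorrowedFd::borrow_raw(raw) };
    assert_eq!(borrowed.as_raw_fd(), raw);
    // `file` is dropped (closing the fd) only after the last use of `borrowed`.
    Ok(())
}
```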
3448std::os::windows::io::handle::BorrowedHandleborrow_rawfunctionThe resource pointed to by `handle` must be a valid open handle, and it must remain open for the duration of the returned `BorrowedHandle`. Note that it *may* have the value `INVALID_HANDLE_VALUE` (-1), which is sometimes a valid handle value. See [here] for the full story. It *may* also have the value `NULL` (0), which can occur when consoles are detached from processes, or when `windows_subsystem` is used. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
3449std::os::windows::io::handle::HandleOrInvalidfrom_raw_handlefunctionThe passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be `INVALID_HANDLE_VALUE` (-1). Note that not all Windows APIs use `INVALID_HANDLE_VALUE` for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
3450std::os::windows::io::handle::HandleOrNullfrom_raw_handlefunctionThe passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be null. Note that not all Windows APIs use null for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443
3451std::os::windows::io::socket::BorrowedSocketborrow_rawfunctionThe resource pointed to by `socket` must remain open for the duration of the returned `BorrowedSocket`, and it must not have the value `INVALID_SOCKET`.
3452std::os::windows::process::ProcThreadAttributeListBuilderraw_attributefunctionThis function is marked as `unsafe` because it deals with raw pointers and sizes. The caller must ensure both that the value outlives the resulting [`ProcThreadAttributeList`] and that the size parameter is valid.
3453std::stras_ascii_uncheckedfunctionEvery character in this string must be ASCII, or else this is UB.
3454std::stras_bytes_mutfunctionThe caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior.
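A sketch of the `as_bytes_mut` contract (the helper name `ascii_upper_in_place` is hypothetical): the bytes may be mutated freely as long as they are valid UTF-8 again by the time the borrow ends, and replacing ASCII bytes with ASCII bytes never breaks that:

```rust
fn ascii_upper_in_place(s: &mut str) {
    // SAFETY: every write replaces a byte with `to_ascii_uppercase` of itself,
    // which maps ASCII to ASCII and leaves non-ASCII bytes untouched, so the
    // buffer is valid UTF-8 when the borrow ends.
    for b in unsafe { s.as_bytes_mut() } {
        *b = b.to_ascii_uppercase();
    }
}

fn main() {
    let mut s = String::from("héllo");
    ascii_upper_in_place(&mut s);
    assert_eq!(s, "HéLLO"); // the multi-byte 'é' is left untouched
}
```

For this particular transformation the standard, safe `str::make_ascii_uppercase` already exists; the sketch only illustrates the invariant the unsafe borrow must uphold.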
3455std::strfrom_utf8_uncheckedfunctionThe bytes passed in must be valid UTF-8.
3456std::strfrom_utf8_unchecked_mutfunctionThe bytes passed in must be valid UTF-8.
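A sketch of the validate-once pattern that motivates `from_utf8_unchecked`: check the bytes a single time, then skip re-validation on a later borrow of the same immutable buffer:

```rust
fn main() {
    let bytes = b"hello world".to_vec();
    // Validate once up front.
    assert!(std::str::from_utf8(&bytes).is_ok());
    // SAFETY: `bytes` was just checked to be valid UTF-8 and has not been
    // mutated since.
    let s = unsafe { std::str::from_utf8_unchecked(&bytes) };
    assert_eq!(s, "hello world");
}
```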
3457std::strget_uncheckedfunctionCallers of this function must ensure that the following preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
3458std::strget_unchecked_mutfunctionCallers of this function must ensure that the following preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type.
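A sketch of satisfying all three `get_unchecked` preconditions by deriving the range from `char_indices`, whose byte offsets lie on UTF-8 sequence boundaries by construction:

```rust
fn main() {
    let s = "αβγ"; // each of these letters is 2 bytes in UTF-8
    let (start, ch) = s.char_indices().nth(1).unwrap();
    let end = start + ch.len_utf8();
    // SAFETY: start <= end, both are within s.len(), and both come from
    // char_indices/len_utf8, so they sit on UTF-8 sequence boundaries.
    let beta = unsafe { s.get_unchecked(start..end) };
    assert_eq!(beta, "β");
}
```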
3459std::strslice_mut_uncheckedfunctionCallers of this function must ensure that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
3460std::strslice_uncheckedfunctionCallers of this function must ensure that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries.
3461std::thread::builder::Builderspawn_uncheckedfunctionThe caller has to ensure that the spawned thread does not outlive any references in the supplied thread closure and its return type. This can be guaranteed in two ways: - ensure that [`join`][`JoinHandle::join`] is called before any referenced data is dropped - use only types with `'static` lifetime bounds, i.e., those with no or only `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`] and [`thread::spawn`] enforce this property statically)
3462std::thread::thread::Threadfrom_rawfunctionThis function is unsafe because improper use may lead to memory unsafety, even if the returned `Thread` is never accessed. Creating a `Thread` from a pointer other than one returned from [`Thread::into_raw`] is **undefined behavior**. Calling this function twice on the same raw pointer can lead to a double-free if both `Thread` instances are dropped.
3463std::u128unchecked_addfunctionThis results in undefined behavior when `self + rhs > u128::MAX` or `self + rhs < u128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u128::checked_add [`wrapping_add`]: u128::wrapping_add
3464std::u128unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
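The `unchecked_disjoint_bitor` precondition can be checked with stable operators; this sketch does not call the nightly API itself, it only demonstrates the documented equivalence between the two forms of the requirement:

```rust
fn main() {
    let a: u128 = 0b0101;
    let b: u128 = 0b1010;
    // Precondition: no shared set bits.
    assert_eq!(a & b, 0);
    // Equivalent form: OR and addition agree exactly when bits are disjoint.
    assert_eq!(a | b, a + b);
    // With overlapping bits the equivalence fails; calling the unchecked API
    // on such inputs would be immediate UB.
    let c: u128 = 0b0110;
    let d: u128 = 0b0011;
    assert_ne!(c & d, 0);
    assert_ne!(c | d, c + d); // 0b0111 != 0b1001
}
```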
3465std::u128unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3466std::u128unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u128::MAX` or `self * rhs < u128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u128::checked_mul [`wrapping_mul`]: u128::wrapping_mul
3467std::u128unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl
3468std::u128unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS`, i.e. when [`u128::shl_exact`] would return `None`.
3469std::u128unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr
3470std::u128unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS`, i.e. when [`u128::shr_exact`] would return `None`.
3471std::u128unchecked_subfunctionThis results in undefined behavior when `self - rhs > u128::MAX` or `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u128::checked_sub [`wrapping_sub`]: u128::wrapping_sub
3472std::u16unchecked_addfunctionThis results in undefined behavior when `self + rhs > u16::MAX` or `self + rhs < u16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u16::checked_add [`wrapping_add`]: u16::wrapping_add
3473std::u16unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3474std::u16unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3475std::u16unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u16::MAX` or `self * rhs < u16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u16::checked_mul [`wrapping_mul`]: u16::wrapping_mul
3476std::u16unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl
3477std::u16unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS`, i.e. when [`u16::shl_exact`] would return `None`.
3478std::u16unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr
3479std::u16unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS`, i.e. when [`u16::shr_exact`] would return `None`.
3480std::u16unchecked_subfunctionThis results in undefined behavior when `self - rhs > u16::MAX` or `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u16::checked_sub [`wrapping_sub`]: u16::wrapping_sub
3481std::u32unchecked_addfunctionThis results in undefined behavior when `self + rhs > u32::MAX` or `self + rhs < u32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u32::checked_add [`wrapping_add`]: u32::wrapping_add
3482std::u32unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3483std::u32unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3484std::u32unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u32::MAX` or `self * rhs < u32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u32::checked_mul [`wrapping_mul`]: u32::wrapping_mul
3485std::u32unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl
3486std::u32unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shl_exact`] would return `None`.
3487std::u32unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr
3488std::u32unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS`, i.e. when [`u32::shr_exact`] would return `None`.
3489std::u32unchecked_subfunctionThis results in undefined behavior when `self - rhs > u32::MAX` or `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u32::checked_sub [`wrapping_sub`]: u32::wrapping_sub
3490std::u64unchecked_addfunctionThis results in undefined behavior when `self + rhs > u64::MAX` or `self + rhs < u64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u64::checked_add [`wrapping_add`]: u64::wrapping_add
3491std::u64unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3492std::u64unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3493std::u64unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u64::MAX` or `self * rhs < u64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u64::checked_mul [`wrapping_mul`]: u64::wrapping_mul
3494std::u64unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl
3495std::u64unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shl_exact`] would return `None`.
3496std::u64unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr
3497std::u64unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS`, i.e. when [`u64::shr_exact`] would return `None`.
3498std::u64unchecked_subfunctionThis results in undefined behavior when `self - rhs > u64::MAX` or `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u64::checked_sub [`wrapping_sub`]: u64::wrapping_sub
3499std::u8as_ascii_uncheckedfunctionThis byte must be valid ASCII, or else this is UB.
3500std::u8unchecked_addfunctionThis results in undefined behavior when `self + rhs > u8::MAX` or `self + rhs < u8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u8::checked_add [`wrapping_add`]: u8::wrapping_add
3501std::u8unchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3502std::u8unchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3503std::u8unchecked_mulfunctionThis results in undefined behavior when `self * rhs > u8::MAX` or `self * rhs < u8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u8::checked_mul [`wrapping_mul`]: u8::wrapping_mul
3504std::u8unchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl
3505std::u8unchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shl_exact`] would return `None`.
3506std::u8unchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr
3507std::u8unchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS`, i.e. when [`u8::shr_exact`] would return `None`.
3508std::u8unchecked_subfunctionThis results in undefined behavior when `self - rhs > u8::MAX` or `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u8::checked_sub [`wrapping_sub`]: u8::wrapping_sub
3509std::usizeunchecked_addfunctionThis results in undefined behavior when `self + rhs > usize::MAX` or `self + rhs < usize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: usize::checked_add [`wrapping_add`]: usize::wrapping_add
3510std::usizeunchecked_disjoint_bitorfunctionRequires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`.
3511std::usizeunchecked_div_exactfunctionThis results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`.
3512std::usizeunchecked_mulfunctionThis results in undefined behavior when `self * rhs > usize::MAX` or `self * rhs < usize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: usize::checked_mul [`wrapping_mul`]: usize::wrapping_mul
3513std::usizeunchecked_shlfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl
3514std::usizeunchecked_shl_exactfunctionThis results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shl_exact`] would return `None`.
3515std::usizeunchecked_shrfunctionThis results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr
3516std::usizeunchecked_shr_exactfunctionThis results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS`, i.e. when [`usize::shr_exact`] would return `None`.
3517std::usizeunchecked_subfunctionThis results in undefined behavior when `self - rhs > usize::MAX` or `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: usize::checked_sub [`wrapping_sub`]: usize::wrapping_sub