Generated from crates: core, alloc, std.
| Index | Module Path | API Name | Kind | Safety Doc | Mark |
|---|---|---|---|---|---|
| 1 | alloc::alloc | alloc | function | See [`GlobalAlloc::alloc`]. | |
| 2 | alloc::alloc | alloc_zeroed | function | See [`GlobalAlloc::alloc_zeroed`]. | |
| 3 | alloc::alloc | dealloc | function | See [`GlobalAlloc::dealloc`]. | |
| 4 | alloc::alloc | realloc | function | See [`GlobalAlloc::realloc`]. | |
| 5 | alloc::boxed::Box | assume_init | function | As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the value (or, for boxed slices, every value) really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init | |
| 6 | alloc::boxed::Box | downcast_unchecked | function | The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. | |
| 7 | alloc::boxed::Box | from_non_null | function | This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same `NonNull` pointer. The non-null pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section. | |
| 8 | alloc::boxed::Box | from_non_null_in | function | This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The non-null pointer must point to a block of memory allocated by `alloc`. | |
| 9 | alloc::boxed::Box | from_raw | function | This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by the global allocator. The safety conditions are described in the [memory layout] section. | |
| 10 | alloc::boxed::Box | from_raw_in | function | This function is unsafe because improper use may lead to memory problems. For example, a double-free may occur if the function is called twice on the same raw pointer. The raw pointer must point to a block of memory allocated by `alloc`. | |
| 11 | alloc::collections::binary_heap::BinaryHeap | from_raw_vec | function | The supplied `vec` must be a max-heap, i.e. for all indices `0 < i < vec.len()`, `vec[(i - 1) / 2] >= vec[i]`. | |
| 12 | alloc::collections::btree::map::CursorMut | insert_after_unchecked | function | You must ensure that the `BTreeMap` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All keys in the tree must remain in sorted order. | |
| 13 | alloc::collections::btree::map::CursorMut | insert_before_unchecked | function | You must ensure that the `BTreeMap` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All keys in the tree must remain in sorted order. | |
| 14 | alloc::collections::btree::map::CursorMut | with_mutable_key | function | Since this cursor allows mutating keys, you must ensure that the `BTreeMap` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All keys in the tree must remain in sorted order. | |
| 15 | alloc::collections::btree::map::CursorMutKey | insert_after_unchecked | function | You must ensure that the `BTreeMap` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All keys in the tree must remain in sorted order. | |
| 16 | alloc::collections::btree::map::CursorMutKey | insert_before_unchecked | function | You must ensure that the `BTreeMap` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All keys in the tree must remain in sorted order. | |
| 17 | alloc::collections::btree::set::CursorMut | insert_after_unchecked | function | You must ensure that the `BTreeSet` invariants are maintained. Specifically: * The newly inserted element must be unique in the tree. * All elements in the tree must remain in sorted order. | |
| 18 | alloc::collections::btree::set::CursorMut | insert_before_unchecked | function | You must ensure that the `BTreeSet` invariants are maintained. Specifically: * The newly inserted element must be unique in the tree. * All elements in the tree must remain in sorted order. | |
| 19 | alloc::collections::btree::set::CursorMut | with_mutable_key | function | Since this cursor allows mutating elements, you must ensure that the `BTreeSet` invariants are maintained. Specifically: * The newly inserted element must be unique in the tree. * All elements in the tree must remain in sorted order. | |
| 20 | alloc::collections::btree::set::CursorMutKey | insert_after_unchecked | function | You must ensure that the `BTreeSet` invariants are maintained. Specifically: * The key of the newly inserted element must be unique in the tree. * All elements in the tree must remain in sorted order. | |
| 21 | alloc::collections::btree::set::CursorMutKey | insert_before_unchecked | function | You must ensure that the `BTreeSet` invariants are maintained. Specifically: * The newly inserted element must be unique in the tree. * All elements in the tree must remain in sorted order. | |
| 22 | alloc::ffi::c_str::CString | from_raw | function | This should only ever be called with a pointer that was earlier obtained by calling [`CString::into_raw`], and the memory it points to must not be accessed through any other pointer during the lifetime of reconstructed `CString`. Other usage (e.g., trying to take ownership of a string that was allocated by foreign code) is likely to lead to undefined behavior or allocator corruption. This function does not validate ownership of the raw pointer's memory. A double-free may occur if the function is called twice on the same raw pointer. Additionally, the caller must ensure the pointer is not dangling. It should be noted that the length isn't just "recomputed," but that the recomputed length must match the original length from the [`CString::into_raw`] call. This means the [`CString::into_raw`]/`from_raw` methods should not be used when passing the string to C functions that can modify the string's length. > **Note:** If you need to borrow a string that was allocated by > foreign code, use [`CStr`]. If you need to take ownership of > a string that was allocated by foreign code, you will need to > make your own provisions for freeing it appropriately, likely > with the foreign code's API to do that. | |
| 23 | alloc::ffi::c_str::CString | from_vec_unchecked | function | | |
| 24 | alloc::ffi::c_str::CString | from_vec_with_nul_unchecked | function | The given [`Vec`] **must** have one nul byte as its last element. This means it cannot be empty nor have any other nul byte anywhere else. | |
| 25 | alloc::rc::Rc | assume_init | function | As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init | |
| 26 | alloc::rc::Rc | decrement_strong_count | function | The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in | |
| 27 | alloc::rc::Rc | decrement_strong_count_in | function | The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Rc` and backing storage, but **should not** be called after the final `Rc` has been released. [from_raw_in]: Rc::from_raw_in | |
| 28 | alloc::rc::Rc | downcast_unchecked | function | The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. | |
| 29 | alloc::rc::Rc | from_raw | function | | |
| 30 | alloc::rc::Rc | from_raw_in | function | | |
| 31 | alloc::rc::Rc | get_mut_unchecked | function | If any other `Rc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this Rc (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Rc::new`. | |
| 32 | alloc::rc::Rc | increment_strong_count | function | The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Rc::from_raw_in | |
| 33 | alloc::rc::Rc | increment_strong_count_in | function | The pointer must have been obtained through `Rc::into_raw` and must satisfy the same layout requirements specified in [`Rc::from_raw_in`][from_raw_in]. The associated `Rc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Rc::from_raw_in | |
| 34 | alloc::rc::Weak | from_raw | function | The pointer must have originated from the [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`]. | |
| 35 | alloc::rc::Weak | from_raw_in | function | The pointer must have originated from the [`into_raw`] and must still own its potential weak reference, and `ptr` must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`]. | |
| 36 | alloc::str | from_boxed_utf8_unchecked | function | * The provided bytes must contain a valid UTF-8 sequence. | |
| 37 | alloc::string::String | as_mut_vec | function | This function is unsafe because the returned `&mut Vec` allows writing bytes which are not valid UTF-8. If this constraint is violated, using the original `String` after dropping the `&mut Vec` may violate memory safety, as the rest of the standard library assumes that `String`s are valid UTF-8. | |
| 38 | alloc::string::String | from_raw_parts | function | This is highly unsafe, due to the number of invariants that aren't checked: * all safety requirements for [`Vec::<u8>::from_raw_parts`]. * all safety requirements for [`String::from_utf8_unchecked`]. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `String` from a pointer to a C `char` array containing UTF-8 _unless_ you are certain that array was originally allocated by the Rust standard library's allocator. The ownership of `buf` is effectively transferred to the `String` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. | |
| 39 | alloc::string::String | from_utf8_unchecked | function | This function is unsafe because it does not check that the bytes passed to it are valid UTF-8. If this constraint is violated, it may cause memory unsafety issues with future users of the `String`, as the rest of the standard library assumes that `String`s are valid UTF-8. | |
| 40 | alloc::sync::Arc | assume_init | function | As with [`MaybeUninit::assume_init`], it is up to the caller to guarantee that the inner value really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. [`MaybeUninit::assume_init`]: mem::MaybeUninit::assume_init | |
| 41 | alloc::sync::Arc | decrement_strong_count | function | The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by the global allocator. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in | |
| 42 | alloc::sync::Arc | decrement_strong_count_in | function | The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) when invoking this method, and `ptr` must point to a block of memory allocated by `alloc`. This method can be used to release the final `Arc` and backing storage, but **should not** be called after the final `Arc` has been released. [from_raw_in]: Arc::from_raw_in | |
| 43 | alloc::sync::Arc | downcast_unchecked | function | The contained value must be of type `T`. Calling this method with the incorrect type is *undefined behavior*. | |
| 44 | alloc::sync::Arc | from_raw | function | | |
| 45 | alloc::sync::Arc | from_raw_in | function | | |
| 46 | alloc::sync::Arc | get_mut_unchecked | function | If any other `Arc` or [`Weak`] pointers to the same allocation exist, then they must not be dereferenced or have active borrows for the duration of the returned borrow, and their inner type must be exactly the same as the inner type of this Arc (including lifetimes). This is trivially the case if no such pointers exist, for example immediately after `Arc::new`. | |
| 47 | alloc::sync::Arc | increment_strong_count | function | The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by the global allocator. [from_raw_in]: Arc::from_raw_in | |
| 48 | alloc::sync::Arc | increment_strong_count_in | function | The pointer must have been obtained through `Arc::into_raw` and must satisfy the same layout requirements specified in [`Arc::from_raw_in`][from_raw_in]. The associated `Arc` instance must be valid (i.e. the strong count must be at least 1) for the duration of this method, and `ptr` must point to a block of memory allocated by `alloc`. [from_raw_in]: Arc::from_raw_in | |
| 49 | alloc::sync::Weak | from_raw | function | The pointer must have originated from the [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by the global allocator. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`]. | |
| 50 | alloc::sync::Weak | from_raw_in | function | The pointer must have originated from the [`into_raw`] and must still own its potential weak reference, and must point to a block of memory allocated by `alloc`. It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this takes ownership of one weak reference currently represented as a raw pointer (the weak count is not modified by this operation) and therefore it must be paired with a previous call to [`into_raw`]. | |
| 51 | alloc::vec::Vec | from_parts | function | This is highly unsafe, due to the number of invariants that aren't checked: * `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function. * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.) * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.) * `length` needs to be less than or equal to `capacity`. * The first `length` values must be properly initialized values of type `T`. * `capacity` needs to be the capacity that the pointer was allocated with. * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`NonNull::slice_from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc | |
| 52 | alloc::vec::Vec | from_parts_in | function | This is highly unsafe, due to the number of invariants that aren't checked: * `ptr` must be [*currently allocated*] via the given allocator `alloc`. * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.) * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.) * `length` needs to be less than or equal to `capacity`. * The first `length` values must be properly initialized values of type `T`. * `capacity` needs to [*fit*] the layout size that the pointer was allocated with. * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting | |
| 53 | alloc::vec::Vec | from_raw_parts | function | This is highly unsafe, due to the number of invariants that aren't checked: * If `T` is not a zero-sized type and the capacity is nonzero, `ptr` must have been allocated using the global allocator, such as via the [`alloc::alloc`] function. If `T` is a zero-sized type or the capacity is zero, `ptr` need only be non-null and aligned. * `T` needs to have the same alignment as what `ptr` was allocated with, if the pointer is required to be allocated. (`T` having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.) * The size of `T` times the `capacity` (i.e. the allocated size in bytes), if nonzero, needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.) * `length` needs to be less than or equal to `capacity`. * The first `length` values must be properly initialized values of type `T`. * `capacity` needs to be the capacity that the pointer was allocated with, if the pointer is required to be allocated. * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. These requirements are always upheld by any `ptr` that has been allocated via `Vec<T>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is normally **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`; doing so is only safe if the array was initially allocated by a `Vec` or `String`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. To avoid these issues, it is often preferable to do casting/transmuting using [`slice::from_raw_parts`] instead. The ownership of `ptr` is effectively transferred to the `Vec<T>` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`alloc::alloc`]: crate::alloc::alloc [`dealloc`]: crate::alloc::GlobalAlloc::dealloc | |
| 54 | alloc::vec::Vec | from_raw_parts_in | function | This is highly unsafe, due to the number of invariants that aren't checked: * `ptr` must be [*currently allocated*] via the given allocator `alloc`. * `T` needs to have the same alignment as what `ptr` was allocated with. (`T` having a less strict alignment is not sufficient, the alignment really needs to be equal to satisfy the [`dealloc`] requirement that memory must be allocated and deallocated with the same layout.) * The size of `T` times the `capacity` (i.e. the allocated size in bytes) needs to be the same size as the pointer was allocated with. (Because similar to alignment, [`dealloc`] must be called with the same layout `size`.) * `length` needs to be less than or equal to `capacity`. * The first `length` values must be properly initialized values of type `T`. * `capacity` needs to [*fit*] the layout size that the pointer was allocated with. * The allocated size in bytes must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. These requirements are always upheld by any `ptr` that has been allocated via `Vec<T, A>`. Other allocation sources are allowed if the invariants are upheld. Violating these may cause problems like corrupting the allocator's internal data structures. For example, it is **not** safe to build a `Vec<u8>` from a pointer to a C `char` array with length `size_t`. It's also not safe to build one from a `Vec<u16>` and its length, because the allocator cares about the alignment, and these two types have different alignments. The buffer was allocated with alignment 2 (for `u16`), but after turning it into a `Vec<u8>` it'll be deallocated with alignment 1. The ownership of `ptr` is effectively transferred to the `Vec<T>` which may then deallocate, reallocate or change the contents of memory pointed to by the pointer at will. Ensure that nothing else uses the pointer after calling this function. [`String`]: crate::string::String [`dealloc`]: crate::alloc::GlobalAlloc::dealloc [*currently allocated*]: crate::alloc::Allocator#currently-allocated-memory [*fit*]: crate::alloc::Allocator#memory-fitting | |
| 55 | alloc::vec::Vec | set_len | function | - `new_len` must be less than or equal to [`capacity()`]. - The elements at `old_len..new_len` must be initialized. [`capacity()`]: Vec::capacity | |
| 56 | core::alloc | Allocator | trait | Memory blocks that are [*currently allocated*] by an allocator, must point to valid memory, and retain their validity until either: - the memory block is deallocated, or - the allocator is dropped. Copying, cloning, or moving the allocator must not invalidate memory blocks returned from it. A copied or cloned allocator must behave like the original allocator. A memory block which is [*currently allocated*] may be passed to any method of the allocator that accepts such an argument. [*currently allocated*]: #currently-allocated-memory | |
| 57 | core::alloc::global | GlobalAlloc | trait | The `GlobalAlloc` trait is an `unsafe` trait for a number of reasons, and implementors must ensure that they adhere to these contracts: * It's undefined behavior if global allocators unwind. This restriction may be lifted in the future, but currently a panic from any of these functions may lead to memory unsafety. * `Layout` queries and calculations in general must be correct. Callers of this trait are allowed to rely on the contracts defined on each method, and implementors must ensure such contracts remain true. * You must not rely on allocations actually happening, even if there are explicit heap allocations in the source. The optimizer may detect unused allocations that it can either eliminate entirely or move to the stack and thus never invoke the allocator. The optimizer may further assume that allocation is infallible, so code that used to fail due to allocator failures may now suddenly work because the optimizer worked around the need for an allocation. More concretely, the following code example is unsound, irrespective of whether your custom allocator allows counting how many allocations have happened. ```rust,ignore (unsound and has placeholders) drop(Box::new(42)); let number_of_heap_allocs = /* call private allocator API */; unsafe { std::hint::assert_unchecked(number_of_heap_allocs > 0); } ``` Note that the optimizations mentioned above are not the only optimization that can be applied. You may generally not rely on heap allocations happening if they can be removed without changing program behavior. Whether allocations happen or not is not part of the program behavior, even if it could be detected via an allocator that tracks allocations by printing or otherwise having side effects. | |
| 58 | core::alloc::layout::Layout | for_value_raw | function | This function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable for the type `T` acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Layout::for_value`] on a reference to an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html | |
| 59 | core::alloc::layout::Layout | from_size_align_unchecked | function | This function is unsafe as it does not verify the preconditions from [`Layout::from_size_align`]. | |
| 60 | core::alloc::layout::Layout | from_size_alignment_unchecked | function | This function is unsafe as it does not verify the preconditions from [`Layout::from_size_alignment`]. | |
| 61 | core::array | as_ascii_unchecked | function | Every byte in the array must be in `0..=127`, or else this is UB. | |
| 62 | core::array::iter::IntoIter | new_unchecked | function | - The `buffer[initialized]` elements must all be initialized. - The range must be canonical, with `initialized.start <= initialized.end`. - The range must be in-bounds for the buffer, with `initialized.end <= N`. (Like how indexing `[0][100..100]` fails despite the range being empty.) It's sound to have more elements initialized than mentioned, though that will most likely result in them being leaked. | |
| 63 | core::ascii::ascii_char::AsciiChar | digit_unchecked | function | This is immediate UB if called with `d > 64`. If `d >= 10` and `d <= 64`, this is allowed to return any value or panic. Notably, it should not be expected to return hex digits, or any other reasonable extension of the decimal digits. (This loose safety condition is intended to simplify soundness proofs when writing code using this method, since the implementation doesn't need something really specific, not to make those other arguments do something useful. It might be tightened before stabilization.) | |
| 64 | core::ascii::ascii_char::AsciiChar | from_u8_unchecked | function | `b` must be in `0..=127`, or else this is UB. | |
| 65 | core::cell | CloneFromCell | trait | Implementing this trait for a type is sound if and only if the example code in the trait's documentation (gated on the `cell_get_cloned` feature) is sound for `T` = that type. | |
| 66 | core::cell::RefCell | try_borrow_unguarded | function | Unlike `RefCell::borrow`, this method is unsafe because it does not return a `Ref`, thus leaving the borrow flag untouched. Mutably borrowing the `RefCell` while the reference returned by this method is alive is undefined behavior. | |
| 67 | core::cell::UnsafeCell | as_mut_unchecked | function | - It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive. - Mutating the wrapped value through other means while the returned reference is alive is Undefined Behavior. | |
| 68 | core::cell::UnsafeCell | as_ref_unchecked | function | - It is Undefined Behavior to call this while any mutable reference to the wrapped value is alive. - Mutating the wrapped value while the returned reference is alive is Undefined Behavior. | |
| 69 | core::cell::UnsafeCell | replace | function | The caller must take care to avoid aliasing and data races. - It is Undefined Behavior to allow calls to race with any other access to the wrapped value. - It is Undefined Behavior to call this while any other reference(s) to the wrapped value are alive. | |
| 70 | core::char | as_ascii_unchecked | function | This char must be within the ASCII range, or else this is UB. | |
| 71 | core::char | from_u32_unchecked | function | This function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32 | |
| 72 | core::clone | CloneToUninit | trait | Implementations must ensure that when `.clone_to_uninit(dest)` returns normally rather than panicking, it always leaves `*dest` initialized as a valid value of type `Self`. | |
| 73 | core::clone | TrivialClone | trait | `Clone::clone` must be equivalent to copying the value, otherwise calling functions such as `slice::clone_from_slice` can have undefined behavior. | |
| 74 | core::core_arch::aarch64::mte | __arm_mte_create_random_tag | function | ||
| 75 | core::core_arch::aarch64::mte | __arm_mte_exclude_tag | function | ||
| 76 | core::core_arch::aarch64::mte | __arm_mte_get_tag | function | ||
| 77 | core::core_arch::aarch64::mte | __arm_mte_increment_tag | function | ||
| 78 | core::core_arch::aarch64::mte | __arm_mte_ptrdiff | function | ||
| 79 | core::core_arch::aarch64::mte | __arm_mte_set_tag | function | ||
| 80 | core::core_arch::aarch64::neon | vld1_dup_f64 | function | ||
| 81 | core::core_arch::aarch64::neon | vld1_lane_f64 | function | ||
| 82 | core::core_arch::aarch64::neon | vld1q_dup_f64 | function | ||
| 83 | core::core_arch::aarch64::neon | vld1q_lane_f64 | function | ||
| 84 | core::core_arch::aarch64::neon::generated | vld1_f16 | function | * Neon intrinsic unsafe | |
| 85 | core::core_arch::aarch64::neon::generated | vld1_f32 | function | * Neon intrinsic unsafe | |
| 86 | core::core_arch::aarch64::neon::generated | vld1_f64 | function | * Neon intrinsic unsafe | |
| 87 | core::core_arch::aarch64::neon::generated | vld1_f64_x2 | function | * Neon intrinsic unsafe | |
| 88 | core::core_arch::aarch64::neon::generated | vld1_f64_x3 | function | * Neon intrinsic unsafe | |
| 89 | core::core_arch::aarch64::neon::generated | vld1_f64_x4 | function | * Neon intrinsic unsafe | |
| 90 | core::core_arch::aarch64::neon::generated | vld1_p16 | function | * Neon intrinsic unsafe | |
| 91 | core::core_arch::aarch64::neon::generated | vld1_p64 | function | * Neon intrinsic unsafe | |
| 92 | core::core_arch::aarch64::neon::generated | vld1_p8 | function | * Neon intrinsic unsafe | |
| 93 | core::core_arch::aarch64::neon::generated | vld1_s16 | function | * Neon intrinsic unsafe | |
| 94 | core::core_arch::aarch64::neon::generated | vld1_s32 | function | * Neon intrinsic unsafe | |
| 95 | core::core_arch::aarch64::neon::generated | vld1_s64 | function | * Neon intrinsic unsafe | |
| 96 | core::core_arch::aarch64::neon::generated | vld1_s8 | function | * Neon intrinsic unsafe | |
| 97 | core::core_arch::aarch64::neon::generated | vld1_u16 | function | * Neon intrinsic unsafe | |
| 98 | core::core_arch::aarch64::neon::generated | vld1_u32 | function | * Neon intrinsic unsafe | |
| 99 | core::core_arch::aarch64::neon::generated | vld1_u64 | function | * Neon intrinsic unsafe | |
| 100 | core::core_arch::aarch64::neon::generated | vld1_u8 | function | * Neon intrinsic unsafe | |
| 101 | core::core_arch::aarch64::neon::generated | vld1q_f16 | function | * Neon intrinsic unsafe | |
| 102 | core::core_arch::aarch64::neon::generated | vld1q_f32 | function | * Neon intrinsic unsafe | |
| 103 | core::core_arch::aarch64::neon::generated | vld1q_f64 | function | * Neon intrinsic unsafe | |
| 104 | core::core_arch::aarch64::neon::generated | vld1q_f64_x2 | function | * Neon intrinsic unsafe | |
| 105 | core::core_arch::aarch64::neon::generated | vld1q_f64_x3 | function | * Neon intrinsic unsafe | |
| 106 | core::core_arch::aarch64::neon::generated | vld1q_f64_x4 | function | * Neon intrinsic unsafe | |
| 107 | core::core_arch::aarch64::neon::generated | vld1q_p16 | function | * Neon intrinsic unsafe | |
| 108 | core::core_arch::aarch64::neon::generated | vld1q_p64 | function | * Neon intrinsic unsafe | |
| 109 | core::core_arch::aarch64::neon::generated | vld1q_p8 | function | * Neon intrinsic unsafe | |
| 110 | core::core_arch::aarch64::neon::generated | vld1q_s16 | function | * Neon intrinsic unsafe | |
| 111 | core::core_arch::aarch64::neon::generated | vld1q_s32 | function | * Neon intrinsic unsafe | |
| 112 | core::core_arch::aarch64::neon::generated | vld1q_s64 | function | * Neon intrinsic unsafe | |
| 113 | core::core_arch::aarch64::neon::generated | vld1q_s8 | function | * Neon intrinsic unsafe | |
| 114 | core::core_arch::aarch64::neon::generated | vld1q_u16 | function | * Neon intrinsic unsafe | |
| 115 | core::core_arch::aarch64::neon::generated | vld1q_u32 | function | * Neon intrinsic unsafe | |
| 116 | core::core_arch::aarch64::neon::generated | vld1q_u64 | function | * Neon intrinsic unsafe | |
| 117 | core::core_arch::aarch64::neon::generated | vld1q_u8 | function | * Neon intrinsic unsafe | |
| 118 | core::core_arch::aarch64::neon::generated | vld2_dup_f64 | function | * Neon intrinsic unsafe | |
| 119 | core::core_arch::aarch64::neon::generated | vld2_f64 | function | * Neon intrinsic unsafe | |
| 120 | core::core_arch::aarch64::neon::generated | vld2_lane_f64 | function | * Neon intrinsic unsafe | |
| 121 | core::core_arch::aarch64::neon::generated | vld2_lane_p64 | function | * Neon intrinsic unsafe | |
| 122 | core::core_arch::aarch64::neon::generated | vld2_lane_s64 | function | * Neon intrinsic unsafe | |
| 123 | core::core_arch::aarch64::neon::generated | vld2_lane_u64 | function | * Neon intrinsic unsafe | |
| 124 | core::core_arch::aarch64::neon::generated | vld2q_dup_f64 | function | * Neon intrinsic unsafe | |
| 125 | core::core_arch::aarch64::neon::generated | vld2q_dup_p64 | function | * Neon intrinsic unsafe | |
| 126 | core::core_arch::aarch64::neon::generated | vld2q_dup_s64 | function | * Neon intrinsic unsafe | |
| 127 | core::core_arch::aarch64::neon::generated | vld2q_dup_u64 | function | * Neon intrinsic unsafe | |
| 128 | core::core_arch::aarch64::neon::generated | vld2q_f64 | function | * Neon intrinsic unsafe | |
| 129 | core::core_arch::aarch64::neon::generated | vld2q_lane_f64 | function | * Neon intrinsic unsafe | |
| 130 | core::core_arch::aarch64::neon::generated | vld2q_lane_p64 | function | * Neon intrinsic unsafe | |
| 131 | core::core_arch::aarch64::neon::generated | vld2q_lane_p8 | function | * Neon intrinsic unsafe | |
| 132 | core::core_arch::aarch64::neon::generated | vld2q_lane_s64 | function | * Neon intrinsic unsafe | |
| 133 | core::core_arch::aarch64::neon::generated | vld2q_lane_s8 | function | * Neon intrinsic unsafe | |
| 134 | core::core_arch::aarch64::neon::generated | vld2q_lane_u64 | function | * Neon intrinsic unsafe | |
| 135 | core::core_arch::aarch64::neon::generated | vld2q_lane_u8 | function | * Neon intrinsic unsafe | |
| 136 | core::core_arch::aarch64::neon::generated | vld2q_p64 | function | * Neon intrinsic unsafe | |
| 137 | core::core_arch::aarch64::neon::generated | vld2q_s64 | function | * Neon intrinsic unsafe | |
| 138 | core::core_arch::aarch64::neon::generated | vld2q_u64 | function | * Neon intrinsic unsafe | |
| 139 | core::core_arch::aarch64::neon::generated | vld3_dup_f64 | function | * Neon intrinsic unsafe | |
| 140 | core::core_arch::aarch64::neon::generated | vld3_f64 | function | * Neon intrinsic unsafe | |
| 141 | core::core_arch::aarch64::neon::generated | vld3_lane_f64 | function | * Neon intrinsic unsafe | |
| 142 | core::core_arch::aarch64::neon::generated | vld3_lane_p64 | function | * Neon intrinsic unsafe | |
| 143 | core::core_arch::aarch64::neon::generated | vld3_lane_s64 | function | * Neon intrinsic unsafe | |
| 144 | core::core_arch::aarch64::neon::generated | vld3_lane_u64 | function | * Neon intrinsic unsafe | |
| 145 | core::core_arch::aarch64::neon::generated | vld3q_dup_f64 | function | * Neon intrinsic unsafe | |
| 146 | core::core_arch::aarch64::neon::generated | vld3q_dup_p64 | function | * Neon intrinsic unsafe | |
| 147 | core::core_arch::aarch64::neon::generated | vld3q_dup_s64 | function | * Neon intrinsic unsafe | |
| 148 | core::core_arch::aarch64::neon::generated | vld3q_dup_u64 | function | * Neon intrinsic unsafe | |
| 149 | core::core_arch::aarch64::neon::generated | vld3q_f64 | function | * Neon intrinsic unsafe | |
| 150 | core::core_arch::aarch64::neon::generated | vld3q_lane_f64 | function | * Neon intrinsic unsafe | |
| 151 | core::core_arch::aarch64::neon::generated | vld3q_lane_p64 | function | * Neon intrinsic unsafe | |
| 152 | core::core_arch::aarch64::neon::generated | vld3q_lane_p8 | function | * Neon intrinsic unsafe | |
| 153 | core::core_arch::aarch64::neon::generated | vld3q_lane_s64 | function | * Neon intrinsic unsafe | |
| 154 | core::core_arch::aarch64::neon::generated | vld3q_lane_s8 | function | * Neon intrinsic unsafe | |
| 155 | core::core_arch::aarch64::neon::generated | vld3q_lane_u64 | function | * Neon intrinsic unsafe | |
| 156 | core::core_arch::aarch64::neon::generated | vld3q_lane_u8 | function | * Neon intrinsic unsafe | |
| 157 | core::core_arch::aarch64::neon::generated | vld3q_p64 | function | * Neon intrinsic unsafe | |
| 158 | core::core_arch::aarch64::neon::generated | vld3q_s64 | function | * Neon intrinsic unsafe | |
| 159 | core::core_arch::aarch64::neon::generated | vld3q_u64 | function | * Neon intrinsic unsafe | |
| 160 | core::core_arch::aarch64::neon::generated | vld4_dup_f64 | function | * Neon intrinsic unsafe | |
| 161 | core::core_arch::aarch64::neon::generated | vld4_f64 | function | * Neon intrinsic unsafe | |
| 162 | core::core_arch::aarch64::neon::generated | vld4_lane_f64 | function | * Neon intrinsic unsafe | |
| 163 | core::core_arch::aarch64::neon::generated | vld4_lane_p64 | function | * Neon intrinsic unsafe | |
| 164 | core::core_arch::aarch64::neon::generated | vld4_lane_s64 | function | * Neon intrinsic unsafe | |
| 165 | core::core_arch::aarch64::neon::generated | vld4_lane_u64 | function | * Neon intrinsic unsafe | |
| 166 | core::core_arch::aarch64::neon::generated | vld4q_dup_f64 | function | * Neon intrinsic unsafe | |
| 167 | core::core_arch::aarch64::neon::generated | vld4q_dup_p64 | function | * Neon intrinsic unsafe | |
| 168 | core::core_arch::aarch64::neon::generated | vld4q_dup_s64 | function | * Neon intrinsic unsafe | |
| 169 | core::core_arch::aarch64::neon::generated | vld4q_dup_u64 | function | * Neon intrinsic unsafe | |
| 170 | core::core_arch::aarch64::neon::generated | vld4q_f64 | function | * Neon intrinsic unsafe | |
| 171 | core::core_arch::aarch64::neon::generated | vld4q_lane_f64 | function | * Neon intrinsic unsafe | |
| 172 | core::core_arch::aarch64::neon::generated | vld4q_lane_p64 | function | * Neon intrinsic unsafe | |
| 173 | core::core_arch::aarch64::neon::generated | vld4q_lane_p8 | function | * Neon intrinsic unsafe | |
| 174 | core::core_arch::aarch64::neon::generated | vld4q_lane_s64 | function | * Neon intrinsic unsafe | |
| 175 | core::core_arch::aarch64::neon::generated | vld4q_lane_s8 | function | * Neon intrinsic unsafe | |
| 176 | core::core_arch::aarch64::neon::generated | vld4q_lane_u64 | function | * Neon intrinsic unsafe | |
| 177 | core::core_arch::aarch64::neon::generated | vld4q_lane_u8 | function | * Neon intrinsic unsafe | |
| 178 | core::core_arch::aarch64::neon::generated | vld4q_p64 | function | * Neon intrinsic unsafe | |
| 179 | core::core_arch::aarch64::neon::generated | vld4q_s64 | function | * Neon intrinsic unsafe | |
| 180 | core::core_arch::aarch64::neon::generated | vld4q_u64 | function | * Neon intrinsic unsafe | |
| 181 | core::core_arch::aarch64::neon::generated | vldap1_lane_p64 | function | * Neon intrinsic unsafe | |
| 182 | core::core_arch::aarch64::neon::generated | vldap1_lane_s64 | function | * Neon intrinsic unsafe | |
| 183 | core::core_arch::aarch64::neon::generated | vldap1_lane_u64 | function | * Neon intrinsic unsafe | |
| 184 | core::core_arch::aarch64::neon::generated | vldap1q_lane_f64 | function | * Neon intrinsic unsafe | |
| 185 | core::core_arch::aarch64::neon::generated | vldap1q_lane_p64 | function | * Neon intrinsic unsafe | |
| 186 | core::core_arch::aarch64::neon::generated | vldap1q_lane_s64 | function | * Neon intrinsic unsafe | |
| 187 | core::core_arch::aarch64::neon::generated | vldap1q_lane_u64 | function | * Neon intrinsic unsafe | |
| 188 | core::core_arch::aarch64::neon::generated | vluti2_lane_f16 | function | * Neon intrinsic unsafe | |
| 189 | core::core_arch::aarch64::neon::generated | vluti2_lane_p16 | function | * Neon intrinsic unsafe | |
| 190 | core::core_arch::aarch64::neon::generated | vluti2_lane_p8 | function | * Neon intrinsic unsafe | |
| 191 | core::core_arch::aarch64::neon::generated | vluti2_lane_s16 | function | * Neon intrinsic unsafe | |
| 192 | core::core_arch::aarch64::neon::generated | vluti2_lane_s8 | function | * Neon intrinsic unsafe | |
| 193 | core::core_arch::aarch64::neon::generated | vluti2_lane_u16 | function | * Neon intrinsic unsafe | |
| 194 | core::core_arch::aarch64::neon::generated | vluti2_lane_u8 | function | * Neon intrinsic unsafe | |
| 195 | core::core_arch::aarch64::neon::generated | vluti2_laneq_f16 | function | * Neon intrinsic unsafe | |
| 196 | core::core_arch::aarch64::neon::generated | vluti2_laneq_p16 | function | * Neon intrinsic unsafe | |
| 197 | core::core_arch::aarch64::neon::generated | vluti2_laneq_p8 | function | * Neon intrinsic unsafe | |
| 198 | core::core_arch::aarch64::neon::generated | vluti2_laneq_s16 | function | * Neon intrinsic unsafe | |
| 199 | core::core_arch::aarch64::neon::generated | vluti2_laneq_s8 | function | * Neon intrinsic unsafe | |
| 200 | core::core_arch::aarch64::neon::generated | vluti2_laneq_u16 | function | * Neon intrinsic unsafe | |
| 201 | core::core_arch::aarch64::neon::generated | vluti2_laneq_u8 | function | * Neon intrinsic unsafe | |
| 202 | core::core_arch::aarch64::neon::generated | vluti2q_lane_f16 | function | * Neon intrinsic unsafe | |
| 203 | core::core_arch::aarch64::neon::generated | vluti2q_lane_p16 | function | * Neon intrinsic unsafe | |
| 204 | core::core_arch::aarch64::neon::generated | vluti2q_lane_p8 | function | * Neon intrinsic unsafe | |
| 205 | core::core_arch::aarch64::neon::generated | vluti2q_lane_s16 | function | * Neon intrinsic unsafe | |
| 206 | core::core_arch::aarch64::neon::generated | vluti2q_lane_s8 | function | * Neon intrinsic unsafe | |
| 207 | core::core_arch::aarch64::neon::generated | vluti2q_lane_u16 | function | * Neon intrinsic unsafe | |
| 208 | core::core_arch::aarch64::neon::generated | vluti2q_lane_u8 | function | * Neon intrinsic unsafe | |
| 209 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_f16 | function | * Neon intrinsic unsafe | |
| 210 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_p16 | function | * Neon intrinsic unsafe | |
| 211 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_p8 | function | * Neon intrinsic unsafe | |
| 212 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_s16 | function | * Neon intrinsic unsafe | |
| 213 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_s8 | function | * Neon intrinsic unsafe | |
| 214 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_u16 | function | * Neon intrinsic unsafe | |
| 215 | core::core_arch::aarch64::neon::generated | vluti2q_laneq_u8 | function | * Neon intrinsic unsafe | |
| 216 | core::core_arch::aarch64::neon::generated | vluti4q_lane_f16_x2 | function | * Neon intrinsic unsafe | |
| 217 | core::core_arch::aarch64::neon::generated | vluti4q_lane_p16_x2 | function | * Neon intrinsic unsafe | |
| 218 | core::core_arch::aarch64::neon::generated | vluti4q_lane_p8 | function | * Neon intrinsic unsafe | |
| 219 | core::core_arch::aarch64::neon::generated | vluti4q_lane_s16_x2 | function | * Neon intrinsic unsafe | |
| 220 | core::core_arch::aarch64::neon::generated | vluti4q_lane_s8 | function | * Neon intrinsic unsafe | |
| 221 | core::core_arch::aarch64::neon::generated | vluti4q_lane_u16_x2 | function | * Neon intrinsic unsafe | |
| 222 | core::core_arch::aarch64::neon::generated | vluti4q_lane_u8 | function | * Neon intrinsic unsafe | |
| 223 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_f16_x2 | function | * Neon intrinsic unsafe | |
| 224 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_p16_x2 | function | * Neon intrinsic unsafe | |
| 225 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_p8 | function | * Neon intrinsic unsafe | |
| 226 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_s16_x2 | function | * Neon intrinsic unsafe | |
| 227 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_s8 | function | * Neon intrinsic unsafe | |
| 228 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_u16_x2 | function | * Neon intrinsic unsafe | |
| 229 | core::core_arch::aarch64::neon::generated | vluti4q_laneq_u8 | function | * Neon intrinsic unsafe | |
| 230 | core::core_arch::aarch64::neon::generated | vst1_f16 | function | * Neon intrinsic unsafe | |
| 231 | core::core_arch::aarch64::neon::generated | vst1_f32 | function | * Neon intrinsic unsafe | |
| 232 | core::core_arch::aarch64::neon::generated | vst1_f64 | function | * Neon intrinsic unsafe | |
| 233 | core::core_arch::aarch64::neon::generated | vst1_f64_x2 | function | * Neon intrinsic unsafe | |
| 234 | core::core_arch::aarch64::neon::generated | vst1_f64_x3 | function | * Neon intrinsic unsafe | |
| 235 | core::core_arch::aarch64::neon::generated | vst1_f64_x4 | function | * Neon intrinsic unsafe | |
| 236 | core::core_arch::aarch64::neon::generated | vst1_lane_f64 | function | * Neon intrinsic unsafe | |
| 237 | core::core_arch::aarch64::neon::generated | vst1_p16 | function | * Neon intrinsic unsafe | |
| 238 | core::core_arch::aarch64::neon::generated | vst1_p64 | function | * Neon intrinsic unsafe | |
| 239 | core::core_arch::aarch64::neon::generated | vst1_p8 | function | * Neon intrinsic unsafe | |
| 240 | core::core_arch::aarch64::neon::generated | vst1_s16 | function | * Neon intrinsic unsafe | |
| 241 | core::core_arch::aarch64::neon::generated | vst1_s32 | function | * Neon intrinsic unsafe | |
| 242 | core::core_arch::aarch64::neon::generated | vst1_s64 | function | * Neon intrinsic unsafe | |
| 243 | core::core_arch::aarch64::neon::generated | vst1_s8 | function | * Neon intrinsic unsafe | |
| 244 | core::core_arch::aarch64::neon::generated | vst1_u16 | function | * Neon intrinsic unsafe | |
| 245 | core::core_arch::aarch64::neon::generated | vst1_u32 | function | * Neon intrinsic unsafe | |
| 246 | core::core_arch::aarch64::neon::generated | vst1_u64 | function | * Neon intrinsic unsafe | |
| 247 | core::core_arch::aarch64::neon::generated | vst1_u8 | function | * Neon intrinsic unsafe | |
| 248 | core::core_arch::aarch64::neon::generated | vst1q_f16 | function | * Neon intrinsic unsafe | |
| 249 | core::core_arch::aarch64::neon::generated | vst1q_f32 | function | * Neon intrinsic unsafe | |
| 250 | core::core_arch::aarch64::neon::generated | vst1q_f64 | function | * Neon intrinsic unsafe | |
| 251 | core::core_arch::aarch64::neon::generated | vst1q_f64_x2 | function | * Neon intrinsic unsafe | |
| 252 | core::core_arch::aarch64::neon::generated | vst1q_f64_x3 | function | * Neon intrinsic unsafe | |
| 253 | core::core_arch::aarch64::neon::generated | vst1q_f64_x4 | function | * Neon intrinsic unsafe | |
| 254 | core::core_arch::aarch64::neon::generated | vst1q_lane_f64 | function | * Neon intrinsic unsafe | |
| 255 | core::core_arch::aarch64::neon::generated | vst1q_p16 | function | * Neon intrinsic unsafe | |
| 256 | core::core_arch::aarch64::neon::generated | vst1q_p64 | function | * Neon intrinsic unsafe | |
| 257 | core::core_arch::aarch64::neon::generated | vst1q_p8 | function | * Neon intrinsic unsafe | |
| 258 | core::core_arch::aarch64::neon::generated | vst1q_s16 | function | * Neon intrinsic unsafe | |
| 259 | core::core_arch::aarch64::neon::generated | vst1q_s32 | function | * Neon intrinsic unsafe | |
| 260 | core::core_arch::aarch64::neon::generated | vst1q_s64 | function | * Neon intrinsic unsafe | |
| 261 | core::core_arch::aarch64::neon::generated | vst1q_s8 | function | * Neon intrinsic unsafe | |
| 262 | core::core_arch::aarch64::neon::generated | vst1q_u16 | function | * Neon intrinsic unsafe | |
| 263 | core::core_arch::aarch64::neon::generated | vst1q_u32 | function | * Neon intrinsic unsafe | |
| 264 | core::core_arch::aarch64::neon::generated | vst1q_u64 | function | * Neon intrinsic unsafe | |
| 265 | core::core_arch::aarch64::neon::generated | vst1q_u8 | function | * Neon intrinsic unsafe | |
| 266 | core::core_arch::aarch64::neon::generated | vst2_f64 | function | * Neon intrinsic unsafe | |
| 267 | core::core_arch::aarch64::neon::generated | vst2_lane_f64 | function | * Neon intrinsic unsafe | |
| 268 | core::core_arch::aarch64::neon::generated | vst2_lane_p64 | function | * Neon intrinsic unsafe | |
| 269 | core::core_arch::aarch64::neon::generated | vst2_lane_s64 | function | * Neon intrinsic unsafe | |
| 270 | core::core_arch::aarch64::neon::generated | vst2_lane_u64 | function | * Neon intrinsic unsafe | |
| 271 | core::core_arch::aarch64::neon::generated | vst2q_f64 | function | * Neon intrinsic unsafe | |
| 272 | core::core_arch::aarch64::neon::generated | vst2q_lane_f64 | function | * Neon intrinsic unsafe | |
| 273 | core::core_arch::aarch64::neon::generated | vst2q_lane_p64 | function | * Neon intrinsic unsafe | |
| 274 | core::core_arch::aarch64::neon::generated | vst2q_lane_p8 | function | * Neon intrinsic unsafe | |
| 275 | core::core_arch::aarch64::neon::generated | vst2q_lane_s64 | function | * Neon intrinsic unsafe | |
| 276 | core::core_arch::aarch64::neon::generated | vst2q_lane_s8 | function | * Neon intrinsic unsafe | |
| 277 | core::core_arch::aarch64::neon::generated | vst2q_lane_u64 | function | * Neon intrinsic unsafe | |
| 278 | core::core_arch::aarch64::neon::generated | vst2q_lane_u8 | function | * Neon intrinsic unsafe | |
| 279 | core::core_arch::aarch64::neon::generated | vst2q_p64 | function | * Neon intrinsic unsafe | |
| 280 | core::core_arch::aarch64::neon::generated | vst2q_s64 | function | * Neon intrinsic unsafe | |
| 281 | core::core_arch::aarch64::neon::generated | vst2q_u64 | function | * Neon intrinsic unsafe | |
| 282 | core::core_arch::aarch64::neon::generated | vst3_f64 | function | * Neon intrinsic unsafe | |
| 283 | core::core_arch::aarch64::neon::generated | vst3_lane_f64 | function | * Neon intrinsic unsafe | |
| 284 | core::core_arch::aarch64::neon::generated | vst3_lane_p64 | function | * Neon intrinsic unsafe | |
| 285 | core::core_arch::aarch64::neon::generated | vst3_lane_s64 | function | * Neon intrinsic unsafe | |
| 286 | core::core_arch::aarch64::neon::generated | vst3_lane_u64 | function | * Neon intrinsic unsafe | |
| 287 | core::core_arch::aarch64::neon::generated | vst3q_f64 | function | * Neon intrinsic unsafe | |
| 288 | core::core_arch::aarch64::neon::generated | vst3q_lane_f64 | function | * Neon intrinsic unsafe | |
| 289 | core::core_arch::aarch64::neon::generated | vst3q_lane_p64 | function | * Neon intrinsic unsafe | |
| 290 | core::core_arch::aarch64::neon::generated | vst3q_lane_p8 | function | * Neon intrinsic unsafe | |
| 291 | core::core_arch::aarch64::neon::generated | vst3q_lane_s64 | function | * Neon intrinsic unsafe | |
| 292 | core::core_arch::aarch64::neon::generated | vst3q_lane_s8 | function | * Neon intrinsic unsafe | |
| 293 | core::core_arch::aarch64::neon::generated | vst3q_lane_u64 | function | * Neon intrinsic unsafe | |
| 294 | core::core_arch::aarch64::neon::generated | vst3q_lane_u8 | function | * Neon intrinsic unsafe | |
| 295 | core::core_arch::aarch64::neon::generated | vst3q_p64 | function | * Neon intrinsic unsafe | |
| 296 | core::core_arch::aarch64::neon::generated | vst3q_s64 | function | * Neon intrinsic unsafe | |
| 297 | core::core_arch::aarch64::neon::generated | vst3q_u64 | function | * Neon intrinsic unsafe | |
| 298 | core::core_arch::aarch64::neon::generated | vst4_f64 | function | * Neon intrinsic unsafe | |
| 299 | core::core_arch::aarch64::neon::generated | vst4_lane_f64 | function | * Neon intrinsic unsafe | |
| 300 | core::core_arch::aarch64::neon::generated | vst4_lane_p64 | function | * Neon intrinsic unsafe | |
| 301 | core::core_arch::aarch64::neon::generated | vst4_lane_s64 | function | * Neon intrinsic unsafe | |
| 302 | core::core_arch::aarch64::neon::generated | vst4_lane_u64 | function | * Neon intrinsic unsafe | |
| 303 | core::core_arch::aarch64::neon::generated | vst4q_f64 | function | * Neon intrinsic unsafe | |
| 304 | core::core_arch::aarch64::neon::generated | vst4q_lane_f64 | function | * Neon intrinsic unsafe | |
| 305 | core::core_arch::aarch64::neon::generated | vst4q_lane_p64 | function | * Neon intrinsic unsafe | |
| 306 | core::core_arch::aarch64::neon::generated | vst4q_lane_p8 | function | * Neon intrinsic unsafe | |
| 307 | core::core_arch::aarch64::neon::generated | vst4q_lane_s64 | function | * Neon intrinsic unsafe | |
| 308 | core::core_arch::aarch64::neon::generated | vst4q_lane_s8 | function | * Neon intrinsic unsafe | |
| 309 | core::core_arch::aarch64::neon::generated | vst4q_lane_u64 | function | * Neon intrinsic unsafe | |
| 310 | core::core_arch::aarch64::neon::generated | vst4q_lane_u8 | function | * Neon intrinsic unsafe | |
| 311 | core::core_arch::aarch64::neon::generated | vst4q_p64 | function | * Neon intrinsic unsafe | |
| 312 | core::core_arch::aarch64::neon::generated | vst4q_s64 | function | * Neon intrinsic unsafe | |
| 313 | core::core_arch::aarch64::neon::generated | vst4q_u64 | function | * Neon intrinsic unsafe | |
| 314 | core::core_arch::aarch64::prefetch | _prefetch | function | ||
| 315 | core::core_arch::amdgpu | ds_bpermute | function | ||
| 316 | core::core_arch::amdgpu | ds_permute | function | ||
| 317 | core::core_arch::amdgpu | perm | function | ||
| 318 | core::core_arch::amdgpu | permlane16_swap | function | ||
| 319 | core::core_arch::amdgpu | permlane16_u32 | function | ||
| 320 | core::core_arch::amdgpu | permlane16_var | function | ||
| 321 | core::core_arch::amdgpu | permlane32_swap | function | ||
| 322 | core::core_arch::amdgpu | permlane64_u32 | function | ||
| 323 | core::core_arch::amdgpu | permlanex16_u32 | function | ||
| 324 | core::core_arch::amdgpu | permlanex16_var | function | ||
| 325 | core::core_arch::amdgpu | readlane_u32 | function | ||
| 326 | core::core_arch::amdgpu | readlane_u64 | function | ||
| 327 | core::core_arch::amdgpu | s_barrier_signal | function | ||
| 328 | core::core_arch::amdgpu | s_barrier_signal_isfirst | function | ||
| 329 | core::core_arch::amdgpu | s_barrier_wait | function | ||
| 330 | core::core_arch::amdgpu | s_get_barrier_state | function | ||
| 331 | core::core_arch::amdgpu | sched_barrier | function | ||
| 332 | core::core_arch::amdgpu | sched_group_barrier | function | ||
| 333 | core::core_arch::amdgpu | update_dpp | function | ||
| 334 | core::core_arch::amdgpu | writelane_u32 | function | ||
| 335 | core::core_arch::amdgpu | writelane_u64 | function | ||
| 336 | core::core_arch::arm::dsp | __qadd | function | ||
| 337 | core::core_arch::arm::dsp | __qdbl | function | ||
| 338 | core::core_arch::arm::dsp | __qsub | function | ||
| 339 | core::core_arch::arm::dsp | __smlabb | function | ||
| 340 | core::core_arch::arm::dsp | __smlabt | function | ||
| 341 | core::core_arch::arm::dsp | __smlatb | function | ||
| 342 | core::core_arch::arm::dsp | __smlatt | function | ||
| 343 | core::core_arch::arm::dsp | __smlawb | function | ||
| 344 | core::core_arch::arm::dsp | __smlawt | function | ||
| 345 | core::core_arch::arm::dsp | __smulbb | function | ||
| 346 | core::core_arch::arm::dsp | __smulbt | function | ||
| 347 | core::core_arch::arm::dsp | __smultb | function | ||
| 348 | core::core_arch::arm::dsp | __smultt | function | ||
| 349 | core::core_arch::arm::dsp | __smulwb | function | ||
| 350 | core::core_arch::arm::dsp | __smulwt | function | ||
| 351 | core::core_arch::arm::sat | __ssat | function | ||
| 352 | core::core_arch::arm::sat | __usat | function | ||
| 353 | core::core_arch::arm::simd32 | __qadd16 | function | ||
| 354 | core::core_arch::arm::simd32 | __qadd8 | function | ||
| 355 | core::core_arch::arm::simd32 | __qasx | function | ||
| 356 | core::core_arch::arm::simd32 | __qsax | function | ||
| 357 | core::core_arch::arm::simd32 | __qsub16 | function | ||
| 358 | core::core_arch::arm::simd32 | __qsub8 | function | ||
| 359 | core::core_arch::arm::simd32 | __sadd16 | function | ||
| 360 | core::core_arch::arm::simd32 | __sadd8 | function | ||
| 361 | core::core_arch::arm::simd32 | __sasx | function | ||
| 362 | core::core_arch::arm::simd32 | __sel | function | ||
| 363 | core::core_arch::arm::simd32 | __shadd16 | function | ||
| 364 | core::core_arch::arm::simd32 | __shadd8 | function | ||
| 365 | core::core_arch::arm::simd32 | __shsub16 | function | ||
| 366 | core::core_arch::arm::simd32 | __shsub8 | function | ||
| 367 | core::core_arch::arm::simd32 | __smlad | function | ||
| 368 | core::core_arch::arm::simd32 | __smlsd | function | ||
| 369 | core::core_arch::arm::simd32 | __smuad | function | ||
| 370 | core::core_arch::arm::simd32 | __smuadx | function | ||
| 371 | core::core_arch::arm::simd32 | __smusd | function | ||
| 372 | core::core_arch::arm::simd32 | __smusdx | function | ||
| 373 | core::core_arch::arm::simd32 | __ssub8 | function | ||
| 374 | core::core_arch::arm::simd32 | __usad8 | function | ||
| 375 | core::core_arch::arm::simd32 | __usada8 | function | ||
| 376 | core::core_arch::arm::simd32 | __usub8 | function | ||
| 377 | core::core_arch::arm_shared::barrier | __dmb | function | ||
| 378 | core::core_arch::arm_shared::barrier | __dsb | function | ||
| 379 | core::core_arch::arm_shared::barrier | __isb | function | ||
| 380 | core::core_arch::arm_shared::hints | __nop | function | ||
| 381 | core::core_arch::arm_shared::hints | __sev | function | ||
| 382 | core::core_arch::arm_shared::hints | __sevl | function | ||
| 383 | core::core_arch::arm_shared::hints | __wfe | function | ||
| 384 | core::core_arch::arm_shared::hints | __wfi | function | ||
| 385 | core::core_arch::arm_shared::hints | __yield | function | ||
| 386 | core::core_arch::arm_shared::neon::generated | vext_s64 | function | * Neon intrinsic unsafe | |
| 387 | core::core_arch::arm_shared::neon::generated | vext_u64 | function | * Neon intrinsic unsafe | |
| 388 | core::core_arch::arm_shared::neon::generated | vld1_dup_f16 | function | * Neon intrinsic unsafe | |
| 389 | core::core_arch::arm_shared::neon::generated | vld1_dup_f32 | function | * Neon intrinsic unsafe | |
| 390 | core::core_arch::arm_shared::neon::generated | vld1_dup_p16 | function | * Neon intrinsic unsafe | |
| 391 | core::core_arch::arm_shared::neon::generated | vld1_dup_p64 | function | * Neon intrinsic unsafe | |
| 392 | core::core_arch::arm_shared::neon::generated | vld1_dup_p8 | function | * Neon intrinsic unsafe | |
| 393 | core::core_arch::arm_shared::neon::generated | vld1_dup_s16 | function | * Neon intrinsic unsafe | |
| 394 | core::core_arch::arm_shared::neon::generated | vld1_dup_s32 | function | * Neon intrinsic unsafe | |
| 395 | core::core_arch::arm_shared::neon::generated | vld1_dup_s64 | function | * Neon intrinsic unsafe | |
| 396 | core::core_arch::arm_shared::neon::generated | vld1_dup_s8 | function | * Neon intrinsic unsafe | |
| 397 | core::core_arch::arm_shared::neon::generated | vld1_dup_u16 | function | * Neon intrinsic unsafe | |
| 398 | core::core_arch::arm_shared::neon::generated | vld1_dup_u32 | function | * Neon intrinsic unsafe | |
| 399 | core::core_arch::arm_shared::neon::generated | vld1_dup_u64 | function | * Neon intrinsic unsafe | |
| 400 | core::core_arch::arm_shared::neon::generated | vld1_dup_u8 | function | * Neon intrinsic unsafe | |
| 401 | core::core_arch::arm_shared::neon::generated | vld1_f16_x2 | function | * Neon intrinsic unsafe | |
| 402 | core::core_arch::arm_shared::neon::generated | vld1_f16_x3 | function | * Neon intrinsic unsafe | |
| 403 | core::core_arch::arm_shared::neon::generated | vld1_f16_x4 | function | * Neon intrinsic unsafe | |
| 404 | core::core_arch::arm_shared::neon::generated | vld1_f32_x2 | function | * Neon intrinsic unsafe | |
| 405 | core::core_arch::arm_shared::neon::generated | vld1_f32_x3 | function | * Neon intrinsic unsafe | |
| 406 | core::core_arch::arm_shared::neon::generated | vld1_f32_x4 | function | * Neon intrinsic unsafe | |
| 407 | core::core_arch::arm_shared::neon::generated | vld1_lane_f16 | function | * Neon intrinsic unsafe | |
| 408 | core::core_arch::arm_shared::neon::generated | vld1_lane_f32 | function | * Neon intrinsic unsafe | |
| 409 | core::core_arch::arm_shared::neon::generated | vld1_lane_p16 | function | * Neon intrinsic unsafe | |
| 410 | core::core_arch::arm_shared::neon::generated | vld1_lane_p64 | function | * Neon intrinsic unsafe | |
| 411 | core::core_arch::arm_shared::neon::generated | vld1_lane_p8 | function | * Neon intrinsic unsafe | |
| 412 | core::core_arch::arm_shared::neon::generated | vld1_lane_s16 | function | * Neon intrinsic unsafe | |
| 413 | core::core_arch::arm_shared::neon::generated | vld1_lane_s32 | function | * Neon intrinsic unsafe | |
| 414 | core::core_arch::arm_shared::neon::generated | vld1_lane_s64 | function | * Neon intrinsic unsafe | |
| 415 | core::core_arch::arm_shared::neon::generated | vld1_lane_s8 | function | * Neon intrinsic unsafe | |
| 416 | core::core_arch::arm_shared::neon::generated | vld1_lane_u16 | function | * Neon intrinsic unsafe | |
| 417 | core::core_arch::arm_shared::neon::generated | vld1_lane_u32 | function | * Neon intrinsic unsafe | |
| 418 | core::core_arch::arm_shared::neon::generated | vld1_lane_u64 | function | * Neon intrinsic unsafe | |
| 419 | core::core_arch::arm_shared::neon::generated | vld1_lane_u8 | function | * Neon intrinsic unsafe | |
| 420 | core::core_arch::arm_shared::neon::generated | vld1_p16_x2 | function | * Neon intrinsic unsafe | |
| 421 | core::core_arch::arm_shared::neon::generated | vld1_p16_x3 | function | * Neon intrinsic unsafe | |
| 422 | core::core_arch::arm_shared::neon::generated | vld1_p16_x4 | function | * Neon intrinsic unsafe | |
| 423 | core::core_arch::arm_shared::neon::generated | vld1_p64_x2 | function | * Neon intrinsic unsafe | |
| 424 | core::core_arch::arm_shared::neon::generated | vld1_p64_x3 | function | * Neon intrinsic unsafe | |
| 425 | core::core_arch::arm_shared::neon::generated | vld1_p64_x4 | function | * Neon intrinsic unsafe | |
| 426 | core::core_arch::arm_shared::neon::generated | vld1_p8_x2 | function | * Neon intrinsic unsafe | |
| 427 | core::core_arch::arm_shared::neon::generated | vld1_p8_x3 | function | * Neon intrinsic unsafe | |
| 428 | core::core_arch::arm_shared::neon::generated | vld1_p8_x4 | function | * Neon intrinsic unsafe | |
| 429 | core::core_arch::arm_shared::neon::generated | vld1_s16_x2 | function | * Neon intrinsic unsafe | |
| 430 | core::core_arch::arm_shared::neon::generated | vld1_s16_x3 | function | * Neon intrinsic unsafe | |
| 431 | core::core_arch::arm_shared::neon::generated | vld1_s16_x4 | function | * Neon intrinsic unsafe | |
| 432 | core::core_arch::arm_shared::neon::generated | vld1_s32_x2 | function | * Neon intrinsic unsafe | |
| 433 | core::core_arch::arm_shared::neon::generated | vld1_s32_x3 | function | * Neon intrinsic unsafe | |
| 434 | core::core_arch::arm_shared::neon::generated | vld1_s32_x4 | function | * Neon intrinsic unsafe | |
| 435 | core::core_arch::arm_shared::neon::generated | vld1_s64_x2 | function | * Neon intrinsic unsafe | |
| 436 | core::core_arch::arm_shared::neon::generated | vld1_s64_x3 | function | * Neon intrinsic unsafe | |
| 437 | core::core_arch::arm_shared::neon::generated | vld1_s64_x4 | function | * Neon intrinsic unsafe | |
| 438 | core::core_arch::arm_shared::neon::generated | vld1_s8_x2 | function | * Neon intrinsic unsafe | |
| 439 | core::core_arch::arm_shared::neon::generated | vld1_s8_x3 | function | * Neon intrinsic unsafe | |
| 440 | core::core_arch::arm_shared::neon::generated | vld1_s8_x4 | function | * Neon intrinsic unsafe | |
| 441 | core::core_arch::arm_shared::neon::generated | vld1_u16_x2 | function | * Neon intrinsic unsafe | |
| 442 | core::core_arch::arm_shared::neon::generated | vld1_u16_x3 | function | * Neon intrinsic unsafe | |
| 443 | core::core_arch::arm_shared::neon::generated | vld1_u16_x4 | function | * Neon intrinsic unsafe | |
| 444 | core::core_arch::arm_shared::neon::generated | vld1_u32_x2 | function | * Neon intrinsic unsafe | |
| 445 | core::core_arch::arm_shared::neon::generated | vld1_u32_x3 | function | * Neon intrinsic unsafe | |
| 446 | core::core_arch::arm_shared::neon::generated | vld1_u32_x4 | function | * Neon intrinsic unsafe | |
| 447 | core::core_arch::arm_shared::neon::generated | vld1_u64_x2 | function | * Neon intrinsic unsafe | |
| 448 | core::core_arch::arm_shared::neon::generated | vld1_u64_x3 | function | * Neon intrinsic unsafe | |
| 449 | core::core_arch::arm_shared::neon::generated | vld1_u64_x4 | function | * Neon intrinsic unsafe | |
| 450 | core::core_arch::arm_shared::neon::generated | vld1_u8_x2 | function | * Neon intrinsic unsafe | |
| 451 | core::core_arch::arm_shared::neon::generated | vld1_u8_x3 | function | * Neon intrinsic unsafe | |
| 452 | core::core_arch::arm_shared::neon::generated | vld1_u8_x4 | function | * Neon intrinsic unsafe | |
| 453 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f16 | function | * Neon intrinsic unsafe | |
| 454 | core::core_arch::arm_shared::neon::generated | vld1q_dup_f32 | function | * Neon intrinsic unsafe | |
| 455 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p16 | function | * Neon intrinsic unsafe | |
| 456 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p64 | function | * Neon intrinsic unsafe | |
| 457 | core::core_arch::arm_shared::neon::generated | vld1q_dup_p8 | function | * Neon intrinsic unsafe | |
| 458 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s16 | function | * Neon intrinsic unsafe | |
| 459 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s32 | function | * Neon intrinsic unsafe | |
| 460 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s64 | function | * Neon intrinsic unsafe | |
| 461 | core::core_arch::arm_shared::neon::generated | vld1q_dup_s8 | function | * Neon intrinsic unsafe | |
| 462 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u16 | function | * Neon intrinsic unsafe | |
| 463 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u32 | function | * Neon intrinsic unsafe | |
| 464 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u64 | function | * Neon intrinsic unsafe | |
| 465 | core::core_arch::arm_shared::neon::generated | vld1q_dup_u8 | function | * Neon intrinsic unsafe | |
| 466 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x2 | function | * Neon intrinsic unsafe | |
| 467 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x3 | function | * Neon intrinsic unsafe | |
| 468 | core::core_arch::arm_shared::neon::generated | vld1q_f16_x4 | function | * Neon intrinsic unsafe | |
| 469 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x2 | function | * Neon intrinsic unsafe | |
| 470 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x3 | function | * Neon intrinsic unsafe | |
| 471 | core::core_arch::arm_shared::neon::generated | vld1q_f32_x4 | function | * Neon intrinsic unsafe | |
| 472 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f16 | function | * Neon intrinsic unsafe | |
| 473 | core::core_arch::arm_shared::neon::generated | vld1q_lane_f32 | function | * Neon intrinsic unsafe | |
| 474 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p16 | function | * Neon intrinsic unsafe | |
| 475 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p64 | function | * Neon intrinsic unsafe | |
| 476 | core::core_arch::arm_shared::neon::generated | vld1q_lane_p8 | function | * Neon intrinsic unsafe | |
| 477 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s16 | function | * Neon intrinsic unsafe | |
| 478 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s32 | function | * Neon intrinsic unsafe | |
| 479 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s64 | function | * Neon intrinsic unsafe | |
| 480 | core::core_arch::arm_shared::neon::generated | vld1q_lane_s8 | function | * Neon intrinsic unsafe | |
| 481 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u16 | function | * Neon intrinsic unsafe | |
| 482 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u32 | function | * Neon intrinsic unsafe | |
| 483 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u64 | function | * Neon intrinsic unsafe | |
| 484 | core::core_arch::arm_shared::neon::generated | vld1q_lane_u8 | function | * Neon intrinsic unsafe | |
| 485 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x2 | function | * Neon intrinsic unsafe | |
| 486 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x3 | function | * Neon intrinsic unsafe | |
| 487 | core::core_arch::arm_shared::neon::generated | vld1q_p16_x4 | function | * Neon intrinsic unsafe | |
| 488 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x2 | function | * Neon intrinsic unsafe | |
| 489 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x3 | function | * Neon intrinsic unsafe | |
| 490 | core::core_arch::arm_shared::neon::generated | vld1q_p64_x4 | function | * Neon intrinsic unsafe | |
| 491 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x2 | function | * Neon intrinsic unsafe | |
| 492 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x3 | function | * Neon intrinsic unsafe | |
| 493 | core::core_arch::arm_shared::neon::generated | vld1q_p8_x4 | function | * Neon intrinsic unsafe | |
| 494 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x2 | function | * Neon intrinsic unsafe | |
| 495 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x3 | function | * Neon intrinsic unsafe | |
| 496 | core::core_arch::arm_shared::neon::generated | vld1q_s16_x4 | function | * Neon intrinsic unsafe | |
| 497 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x2 | function | * Neon intrinsic unsafe | |
| 498 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x3 | function | * Neon intrinsic unsafe | |
| 499 | core::core_arch::arm_shared::neon::generated | vld1q_s32_x4 | function | * Neon intrinsic unsafe | |
| 500 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x2 | function | * Neon intrinsic unsafe | |
| 501 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x3 | function | * Neon intrinsic unsafe | |
| 502 | core::core_arch::arm_shared::neon::generated | vld1q_s64_x4 | function | * Neon intrinsic unsafe | |
| 503 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x2 | function | * Neon intrinsic unsafe | |
| 504 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x3 | function | * Neon intrinsic unsafe | |
| 505 | core::core_arch::arm_shared::neon::generated | vld1q_s8_x4 | function | * Neon intrinsic unsafe | |
| 506 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x2 | function | * Neon intrinsic unsafe | |
| 507 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x3 | function | * Neon intrinsic unsafe | |
| 508 | core::core_arch::arm_shared::neon::generated | vld1q_u16_x4 | function | * Neon intrinsic unsafe | |
| 509 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x2 | function | * Neon intrinsic unsafe | |
| 510 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x3 | function | * Neon intrinsic unsafe | |
| 511 | core::core_arch::arm_shared::neon::generated | vld1q_u32_x4 | function | * Neon intrinsic unsafe | |
| 512 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x2 | function | * Neon intrinsic unsafe | |
| 513 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x3 | function | * Neon intrinsic unsafe | |
| 514 | core::core_arch::arm_shared::neon::generated | vld1q_u64_x4 | function | * Neon intrinsic unsafe | |
| 515 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x2 | function | * Neon intrinsic unsafe | |
| 516 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x3 | function | * Neon intrinsic unsafe | |
| 517 | core::core_arch::arm_shared::neon::generated | vld1q_u8_x4 | function | * Neon intrinsic unsafe | |
| 518 | core::core_arch::arm_shared::neon::generated | vld2_dup_f16 | function | * Neon intrinsic unsafe | |
| 519 | core::core_arch::arm_shared::neon::generated | vld2_dup_f32 | function | * Neon intrinsic unsafe | |
| 520 | core::core_arch::arm_shared::neon::generated | vld2_dup_p16 | function | * Neon intrinsic unsafe | |
| 521 | core::core_arch::arm_shared::neon::generated | vld2_dup_p64 | function | * Neon intrinsic unsafe | |
| 522 | core::core_arch::arm_shared::neon::generated | vld2_dup_p8 | function | * Neon intrinsic unsafe | |
| 523 | core::core_arch::arm_shared::neon::generated | vld2_dup_s16 | function | * Neon intrinsic unsafe | |
| 524 | core::core_arch::arm_shared::neon::generated | vld2_dup_s32 | function | * Neon intrinsic unsafe | |
| 525 | core::core_arch::arm_shared::neon::generated | vld2_dup_s64 | function | * Neon intrinsic unsafe | |
| 526 | core::core_arch::arm_shared::neon::generated | vld2_dup_s8 | function | * Neon intrinsic unsafe | |
| 527 | core::core_arch::arm_shared::neon::generated | vld2_dup_u16 | function | * Neon intrinsic unsafe | |
| 528 | core::core_arch::arm_shared::neon::generated | vld2_dup_u32 | function | * Neon intrinsic unsafe | |
| 529 | core::core_arch::arm_shared::neon::generated | vld2_dup_u64 | function | * Neon intrinsic unsafe | |
| 530 | core::core_arch::arm_shared::neon::generated | vld2_dup_u8 | function | * Neon intrinsic unsafe | |
| 531 | core::core_arch::arm_shared::neon::generated | vld2_f16 | function | * Neon intrinsic unsafe | |
| 532 | core::core_arch::arm_shared::neon::generated | vld2_f32 | function | * Neon intrinsic unsafe | |
| 533 | core::core_arch::arm_shared::neon::generated | vld2_lane_f16 | function | * Neon intrinsic unsafe | |
| 534 | core::core_arch::arm_shared::neon::generated | vld2_lane_f32 | function | * Neon intrinsic unsafe | |
| 535 | core::core_arch::arm_shared::neon::generated | vld2_lane_p16 | function | * Neon intrinsic unsafe | |
| 536 | core::core_arch::arm_shared::neon::generated | vld2_lane_p8 | function | * Neon intrinsic unsafe | |
| 537 | core::core_arch::arm_shared::neon::generated | vld2_lane_s16 | function | * Neon intrinsic unsafe | |
| 538 | core::core_arch::arm_shared::neon::generated | vld2_lane_s32 | function | * Neon intrinsic unsafe | |
| 539 | core::core_arch::arm_shared::neon::generated | vld2_lane_s8 | function | * Neon intrinsic unsafe | |
| 540 | core::core_arch::arm_shared::neon::generated | vld2_lane_u16 | function | * Neon intrinsic unsafe | |
| 541 | core::core_arch::arm_shared::neon::generated | vld2_lane_u32 | function | * Neon intrinsic unsafe | |
| 542 | core::core_arch::arm_shared::neon::generated | vld2_lane_u8 | function | * Neon intrinsic unsafe | |
| 543 | core::core_arch::arm_shared::neon::generated | vld2_p16 | function | * Neon intrinsic unsafe | |
| 544 | core::core_arch::arm_shared::neon::generated | vld2_p64 | function | * Neon intrinsic unsafe | |
| 545 | core::core_arch::arm_shared::neon::generated | vld2_p8 | function | * Neon intrinsic unsafe | |
| 546 | core::core_arch::arm_shared::neon::generated | vld2_s16 | function | * Neon intrinsic unsafe | |
| 547 | core::core_arch::arm_shared::neon::generated | vld2_s32 | function | * Neon intrinsic unsafe | |
| 548 | core::core_arch::arm_shared::neon::generated | vld2_s64 | function | * Neon intrinsic unsafe | |
| 549 | core::core_arch::arm_shared::neon::generated | vld2_s8 | function | * Neon intrinsic unsafe | |
| 550 | core::core_arch::arm_shared::neon::generated | vld2_u16 | function | * Neon intrinsic unsafe | |
| 551 | core::core_arch::arm_shared::neon::generated | vld2_u32 | function | * Neon intrinsic unsafe | |
| 552 | core::core_arch::arm_shared::neon::generated | vld2_u64 | function | * Neon intrinsic unsafe | |
| 553 | core::core_arch::arm_shared::neon::generated | vld2_u8 | function | * Neon intrinsic unsafe | |
| 554 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f16 | function | * Neon intrinsic unsafe | |
| 555 | core::core_arch::arm_shared::neon::generated | vld2q_dup_f32 | function | * Neon intrinsic unsafe | |
| 556 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p16 | function | * Neon intrinsic unsafe | |
| 557 | core::core_arch::arm_shared::neon::generated | vld2q_dup_p8 | function | * Neon intrinsic unsafe | |
| 558 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s16 | function | * Neon intrinsic unsafe | |
| 559 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s32 | function | * Neon intrinsic unsafe | |
| 560 | core::core_arch::arm_shared::neon::generated | vld2q_dup_s8 | function | * Neon intrinsic unsafe | |
| 561 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u16 | function | * Neon intrinsic unsafe | |
| 562 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u32 | function | * Neon intrinsic unsafe | |
| 563 | core::core_arch::arm_shared::neon::generated | vld2q_dup_u8 | function | * Neon intrinsic unsafe | |
| 564 | core::core_arch::arm_shared::neon::generated | vld2q_f16 | function | * Neon intrinsic unsafe | |
| 565 | core::core_arch::arm_shared::neon::generated | vld2q_f32 | function | * Neon intrinsic unsafe | |
| 566 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f16 | function | * Neon intrinsic unsafe | |
| 567 | core::core_arch::arm_shared::neon::generated | vld2q_lane_f32 | function | * Neon intrinsic unsafe | |
| 568 | core::core_arch::arm_shared::neon::generated | vld2q_lane_p16 | function | * Neon intrinsic unsafe | |
| 569 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s16 | function | * Neon intrinsic unsafe | |
| 570 | core::core_arch::arm_shared::neon::generated | vld2q_lane_s32 | function | * Neon intrinsic unsafe | |
| 571 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u16 | function | * Neon intrinsic unsafe | |
| 572 | core::core_arch::arm_shared::neon::generated | vld2q_lane_u32 | function | * Neon intrinsic unsafe | |
| 573 | core::core_arch::arm_shared::neon::generated | vld2q_p16 | function | * Neon intrinsic unsafe | |
| 574 | core::core_arch::arm_shared::neon::generated | vld2q_p8 | function | * Neon intrinsic unsafe | |
| 575 | core::core_arch::arm_shared::neon::generated | vld2q_s16 | function | * Neon intrinsic unsafe | |
| 576 | core::core_arch::arm_shared::neon::generated | vld2q_s32 | function | * Neon intrinsic unsafe | |
| 577 | core::core_arch::arm_shared::neon::generated | vld2q_s8 | function | * Neon intrinsic unsafe | |
| 578 | core::core_arch::arm_shared::neon::generated | vld2q_u16 | function | * Neon intrinsic unsafe | |
| 579 | core::core_arch::arm_shared::neon::generated | vld2q_u32 | function | * Neon intrinsic unsafe | |
| 580 | core::core_arch::arm_shared::neon::generated | vld2q_u8 | function | * Neon intrinsic unsafe | |
| 581 | core::core_arch::arm_shared::neon::generated | vld3_dup_f16 | function | * Neon intrinsic unsafe | |
| 582 | core::core_arch::arm_shared::neon::generated | vld3_dup_f32 | function | * Neon intrinsic unsafe | |
| 583 | core::core_arch::arm_shared::neon::generated | vld3_dup_p16 | function | * Neon intrinsic unsafe | |
| 584 | core::core_arch::arm_shared::neon::generated | vld3_dup_p64 | function | * Neon intrinsic unsafe | |
| 585 | core::core_arch::arm_shared::neon::generated | vld3_dup_p8 | function | * Neon intrinsic unsafe | |
| 586 | core::core_arch::arm_shared::neon::generated | vld3_dup_s16 | function | * Neon intrinsic unsafe | |
| 587 | core::core_arch::arm_shared::neon::generated | vld3_dup_s32 | function | * Neon intrinsic unsafe | |
| 588 | core::core_arch::arm_shared::neon::generated | vld3_dup_s64 | function | * Neon intrinsic unsafe | |
| 589 | core::core_arch::arm_shared::neon::generated | vld3_dup_s8 | function | * Neon intrinsic unsafe | |
| 590 | core::core_arch::arm_shared::neon::generated | vld3_dup_u16 | function | * Neon intrinsic unsafe | |
| 591 | core::core_arch::arm_shared::neon::generated | vld3_dup_u32 | function | * Neon intrinsic unsafe | |
| 592 | core::core_arch::arm_shared::neon::generated | vld3_dup_u64 | function | * Neon intrinsic unsafe | |
| 593 | core::core_arch::arm_shared::neon::generated | vld3_dup_u8 | function | * Neon intrinsic unsafe | |
| 594 | core::core_arch::arm_shared::neon::generated | vld3_f16 | function | * Neon intrinsic unsafe | |
| 595 | core::core_arch::arm_shared::neon::generated | vld3_f32 | function | * Neon intrinsic unsafe | |
| 596 | core::core_arch::arm_shared::neon::generated | vld3_lane_f16 | function | * Neon intrinsic unsafe | |
| 597 | core::core_arch::arm_shared::neon::generated | vld3_lane_f32 | function | * Neon intrinsic unsafe | |
| 598 | core::core_arch::arm_shared::neon::generated | vld3_lane_p16 | function | * Neon intrinsic unsafe | |
| 599 | core::core_arch::arm_shared::neon::generated | vld3_lane_p8 | function | * Neon intrinsic unsafe | |
| 600 | core::core_arch::arm_shared::neon::generated | vld3_lane_s16 | function | * Neon intrinsic unsafe | |
| 601 | core::core_arch::arm_shared::neon::generated | vld3_lane_s32 | function | * Neon intrinsic unsafe | |
| 602 | core::core_arch::arm_shared::neon::generated | vld3_lane_s8 | function | * Neon intrinsic unsafe | |
| 603 | core::core_arch::arm_shared::neon::generated | vld3_lane_u16 | function | * Neon intrinsic unsafe | |
| 604 | core::core_arch::arm_shared::neon::generated | vld3_lane_u32 | function | * Neon intrinsic unsafe | |
| 605 | core::core_arch::arm_shared::neon::generated | vld3_lane_u8 | function | * Neon intrinsic unsafe | |
| 606 | core::core_arch::arm_shared::neon::generated | vld3_p16 | function | * Neon intrinsic unsafe | |
| 607 | core::core_arch::arm_shared::neon::generated | vld3_p64 | function | * Neon intrinsic unsafe | |
| 608 | core::core_arch::arm_shared::neon::generated | vld3_p8 | function | * Neon intrinsic unsafe | |
| 609 | core::core_arch::arm_shared::neon::generated | vld3_s16 | function | * Neon intrinsic unsafe | |
| 610 | core::core_arch::arm_shared::neon::generated | vld3_s32 | function | * Neon intrinsic unsafe | |
| 611 | core::core_arch::arm_shared::neon::generated | vld3_s64 | function | * Neon intrinsic unsafe | |
| 612 | core::core_arch::arm_shared::neon::generated | vld3_s8 | function | * Neon intrinsic unsafe | |
| 613 | core::core_arch::arm_shared::neon::generated | vld3_u16 | function | * Neon intrinsic unsafe | |
| 614 | core::core_arch::arm_shared::neon::generated | vld3_u32 | function | * Neon intrinsic unsafe | |
| 615 | core::core_arch::arm_shared::neon::generated | vld3_u64 | function | * Neon intrinsic unsafe | |
| 616 | core::core_arch::arm_shared::neon::generated | vld3_u8 | function | * Neon intrinsic unsafe | |
| 617 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f16 | function | * Neon intrinsic unsafe | |
| 618 | core::core_arch::arm_shared::neon::generated | vld3q_dup_f32 | function | * Neon intrinsic unsafe | |
| 619 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p16 | function | * Neon intrinsic unsafe | |
| 620 | core::core_arch::arm_shared::neon::generated | vld3q_dup_p8 | function | * Neon intrinsic unsafe | |
| 621 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s16 | function | * Neon intrinsic unsafe | |
| 622 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s32 | function | * Neon intrinsic unsafe | |
| 623 | core::core_arch::arm_shared::neon::generated | vld3q_dup_s8 | function | * Neon intrinsic unsafe | |
| 624 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u16 | function | * Neon intrinsic unsafe | |
| 625 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u32 | function | * Neon intrinsic unsafe | |
| 626 | core::core_arch::arm_shared::neon::generated | vld3q_dup_u8 | function | * Neon intrinsic unsafe | |
| 627 | core::core_arch::arm_shared::neon::generated | vld3q_f16 | function | * Neon intrinsic unsafe | |
| 628 | core::core_arch::arm_shared::neon::generated | vld3q_f32 | function | * Neon intrinsic unsafe | |
| 629 | core::core_arch::arm_shared::neon::generated | vld3q_lane_f16 | function | * Neon intrinsic unsafe | |
| 630 | core::core_arch::arm_shared::neon::generated | vld3q_lane_f32 | function | * Neon intrinsic unsafe | |
| 631 | core::core_arch::arm_shared::neon::generated | vld3q_lane_p16 | function | * Neon intrinsic unsafe | |
| 632 | core::core_arch::arm_shared::neon::generated | vld3q_lane_s16 | function | * Neon intrinsic unsafe | |
| 633 | core::core_arch::arm_shared::neon::generated | vld3q_lane_s32 | function | * Neon intrinsic unsafe | |
| 634 | core::core_arch::arm_shared::neon::generated | vld3q_lane_u16 | function | * Neon intrinsic unsafe | |
| 635 | core::core_arch::arm_shared::neon::generated | vld3q_lane_u32 | function | * Neon intrinsic unsafe | |
| 636 | core::core_arch::arm_shared::neon::generated | vld3q_p16 | function | * Neon intrinsic unsafe | |
| 637 | core::core_arch::arm_shared::neon::generated | vld3q_p8 | function | * Neon intrinsic unsafe | |
| 638 | core::core_arch::arm_shared::neon::generated | vld3q_s16 | function | * Neon intrinsic unsafe | |
| 639 | core::core_arch::arm_shared::neon::generated | vld3q_s32 | function | * Neon intrinsic unsafe | |
| 640 | core::core_arch::arm_shared::neon::generated | vld3q_s8 | function | * Neon intrinsic unsafe | |
| 641 | core::core_arch::arm_shared::neon::generated | vld3q_u16 | function | * Neon intrinsic unsafe | |
| 642 | core::core_arch::arm_shared::neon::generated | vld3q_u32 | function | * Neon intrinsic unsafe | |
| 643 | core::core_arch::arm_shared::neon::generated | vld3q_u8 | function | * Neon intrinsic unsafe | |
| 644 | core::core_arch::arm_shared::neon::generated | vld4_dup_f16 | function | * Neon intrinsic unsafe | |
| 645 | core::core_arch::arm_shared::neon::generated | vld4_dup_f32 | function | * Neon intrinsic unsafe | |
| 646 | core::core_arch::arm_shared::neon::generated | vld4_dup_p16 | function | * Neon intrinsic unsafe | |
| 647 | core::core_arch::arm_shared::neon::generated | vld4_dup_p64 | function | * Neon intrinsic unsafe | |
| 648 | core::core_arch::arm_shared::neon::generated | vld4_dup_p8 | function | * Neon intrinsic unsafe | |
| 649 | core::core_arch::arm_shared::neon::generated | vld4_dup_s16 | function | * Neon intrinsic unsafe | |
| 650 | core::core_arch::arm_shared::neon::generated | vld4_dup_s32 | function | * Neon intrinsic unsafe | |
| 651 | core::core_arch::arm_shared::neon::generated | vld4_dup_s64 | function | * Neon intrinsic unsafe | |
| 652 | core::core_arch::arm_shared::neon::generated | vld4_dup_s8 | function | * Neon intrinsic unsafe | |
| 653 | core::core_arch::arm_shared::neon::generated | vld4_dup_u16 | function | * Neon intrinsic unsafe | |
| 654 | core::core_arch::arm_shared::neon::generated | vld4_dup_u32 | function | * Neon intrinsic unsafe | |
| 655 | core::core_arch::arm_shared::neon::generated | vld4_dup_u64 | function | * Neon intrinsic unsafe | |
| 656 | core::core_arch::arm_shared::neon::generated | vld4_dup_u8 | function | * Neon intrinsic unsafe | |
| 657 | core::core_arch::arm_shared::neon::generated | vld4_f16 | function | * Neon intrinsic unsafe | |
| 658 | core::core_arch::arm_shared::neon::generated | vld4_f32 | function | * Neon intrinsic unsafe | |
| 659 | core::core_arch::arm_shared::neon::generated | vld4_lane_f16 | function | * Neon intrinsic unsafe | |
| 660 | core::core_arch::arm_shared::neon::generated | vld4_lane_f32 | function | * Neon intrinsic unsafe | |
| 661 | core::core_arch::arm_shared::neon::generated | vld4_lane_p16 | function | * Neon intrinsic unsafe | |
| 662 | core::core_arch::arm_shared::neon::generated | vld4_lane_p8 | function | * Neon intrinsic unsafe | |
| 663 | core::core_arch::arm_shared::neon::generated | vld4_lane_s16 | function | * Neon intrinsic unsafe | |
| 664 | core::core_arch::arm_shared::neon::generated | vld4_lane_s32 | function | * Neon intrinsic unsafe | |
| 665 | core::core_arch::arm_shared::neon::generated | vld4_lane_s8 | function | * Neon intrinsic unsafe | |
| 666 | core::core_arch::arm_shared::neon::generated | vld4_lane_u16 | function | * Neon intrinsic unsafe | |
| 667 | core::core_arch::arm_shared::neon::generated | vld4_lane_u32 | function | * Neon intrinsic unsafe | |
| 668 | core::core_arch::arm_shared::neon::generated | vld4_lane_u8 | function | * Neon intrinsic unsafe | |
| 669 | core::core_arch::arm_shared::neon::generated | vld4_p16 | function | * Neon intrinsic unsafe | |
| 670 | core::core_arch::arm_shared::neon::generated | vld4_p64 | function | * Neon intrinsic unsafe | |
| 671 | core::core_arch::arm_shared::neon::generated | vld4_p8 | function | * Neon intrinsic unsafe | |
| 672 | core::core_arch::arm_shared::neon::generated | vld4_s16 | function | * Neon intrinsic unsafe | |
| 673 | core::core_arch::arm_shared::neon::generated | vld4_s32 | function | * Neon intrinsic unsafe | |
| 674 | core::core_arch::arm_shared::neon::generated | vld4_s64 | function | * Neon intrinsic unsafe | |
| 675 | core::core_arch::arm_shared::neon::generated | vld4_s8 | function | * Neon intrinsic unsafe | |
| 676 | core::core_arch::arm_shared::neon::generated | vld4_u16 | function | * Neon intrinsic unsafe | |
| 677 | core::core_arch::arm_shared::neon::generated | vld4_u32 | function | * Neon intrinsic unsafe | |
| 678 | core::core_arch::arm_shared::neon::generated | vld4_u64 | function | * Neon intrinsic unsafe | |
| 679 | core::core_arch::arm_shared::neon::generated | vld4_u8 | function | * Neon intrinsic unsafe | |
| 680 | core::core_arch::arm_shared::neon::generated | vld4q_dup_f16 | function | * Neon intrinsic unsafe | |
| 681 | core::core_arch::arm_shared::neon::generated | vld4q_dup_f32 | function | * Neon intrinsic unsafe | |
| 682 | core::core_arch::arm_shared::neon::generated | vld4q_dup_p16 | function | * Neon intrinsic unsafe | |
| 683 | core::core_arch::arm_shared::neon::generated | vld4q_dup_p8 | function | * Neon intrinsic unsafe | |
| 684 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s16 | function | * Neon intrinsic unsafe | |
| 685 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s32 | function | * Neon intrinsic unsafe | |
| 686 | core::core_arch::arm_shared::neon::generated | vld4q_dup_s8 | function | * Neon intrinsic unsafe | |
| 687 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u16 | function | * Neon intrinsic unsafe | |
| 688 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u32 | function | * Neon intrinsic unsafe | |
| 689 | core::core_arch::arm_shared::neon::generated | vld4q_dup_u8 | function | * Neon intrinsic unsafe | |
| 690 | core::core_arch::arm_shared::neon::generated | vld4q_f16 | function | * Neon intrinsic unsafe | |
| 691 | core::core_arch::arm_shared::neon::generated | vld4q_f32 | function | * Neon intrinsic unsafe | |
| 692 | core::core_arch::arm_shared::neon::generated | vld4q_lane_f16 | function | * Neon intrinsic unsafe | |
| 693 | core::core_arch::arm_shared::neon::generated | vld4q_lane_f32 | function | * Neon intrinsic unsafe | |
| 694 | core::core_arch::arm_shared::neon::generated | vld4q_lane_p16 | function | * Neon intrinsic unsafe | |
| 695 | core::core_arch::arm_shared::neon::generated | vld4q_lane_s16 | function | * Neon intrinsic unsafe | |
| 696 | core::core_arch::arm_shared::neon::generated | vld4q_lane_s32 | function | * Neon intrinsic unsafe | |
| 697 | core::core_arch::arm_shared::neon::generated | vld4q_lane_u16 | function | * Neon intrinsic unsafe | |
| 698 | core::core_arch::arm_shared::neon::generated | vld4q_lane_u32 | function | * Neon intrinsic unsafe | |
| 699 | core::core_arch::arm_shared::neon::generated | vld4q_p16 | function | * Neon intrinsic unsafe | |
| 700 | core::core_arch::arm_shared::neon::generated | vld4q_p8 | function | * Neon intrinsic unsafe | |
| 701 | core::core_arch::arm_shared::neon::generated | vld4q_s16 | function | * Neon intrinsic unsafe | |
| 702 | core::core_arch::arm_shared::neon::generated | vld4q_s32 | function | * Neon intrinsic unsafe | |
| 703 | core::core_arch::arm_shared::neon::generated | vld4q_s8 | function | * Neon intrinsic unsafe | |
| 704 | core::core_arch::arm_shared::neon::generated | vld4q_u16 | function | * Neon intrinsic unsafe | |
| 705 | core::core_arch::arm_shared::neon::generated | vld4q_u32 | function | * Neon intrinsic unsafe | |
| 706 | core::core_arch::arm_shared::neon::generated | vld4q_u8 | function | * Neon intrinsic unsafe | |
| 707 | core::core_arch::arm_shared::neon::generated | vldrq_p128 | function | * Neon intrinsic unsafe | |
| 708 | core::core_arch::arm_shared::neon::generated | vst1_f16_x2 | function | * Neon intrinsic unsafe | |
| 709 | core::core_arch::arm_shared::neon::generated | vst1_f16_x3 | function | * Neon intrinsic unsafe | |
| 710 | core::core_arch::arm_shared::neon::generated | vst1_f16_x4 | function | * Neon intrinsic unsafe | |
| 711 | core::core_arch::arm_shared::neon::generated | vst1_f32_x2 | function | * Neon intrinsic unsafe | |
| 712 | core::core_arch::arm_shared::neon::generated | vst1_f32_x3 | function | * Neon intrinsic unsafe | |
| 713 | core::core_arch::arm_shared::neon::generated | vst1_f32_x4 | function | * Neon intrinsic unsafe | |
| 714 | core::core_arch::arm_shared::neon::generated | vst1_lane_f16 | function | * Neon intrinsic unsafe | |
| 715 | core::core_arch::arm_shared::neon::generated | vst1_lane_f32 | function | * Neon intrinsic unsafe | |
| 716 | core::core_arch::arm_shared::neon::generated | vst1_lane_p16 | function | * Neon intrinsic unsafe | |
| 717 | core::core_arch::arm_shared::neon::generated | vst1_lane_p64 | function | * Neon intrinsic unsafe | |
| 718 | core::core_arch::arm_shared::neon::generated | vst1_lane_p8 | function | * Neon intrinsic unsafe | |
| 719 | core::core_arch::arm_shared::neon::generated | vst1_lane_s16 | function | * Neon intrinsic unsafe | |
| 720 | core::core_arch::arm_shared::neon::generated | vst1_lane_s32 | function | * Neon intrinsic unsafe | |
| 721 | core::core_arch::arm_shared::neon::generated | vst1_lane_s64 | function | * Neon intrinsic unsafe | |
| 722 | core::core_arch::arm_shared::neon::generated | vst1_lane_s8 | function | * Neon intrinsic unsafe | |
| 723 | core::core_arch::arm_shared::neon::generated | vst1_lane_u16 | function | * Neon intrinsic unsafe | |
| 724 | core::core_arch::arm_shared::neon::generated | vst1_lane_u32 | function | * Neon intrinsic unsafe | |
| 725 | core::core_arch::arm_shared::neon::generated | vst1_lane_u64 | function | * Neon intrinsic unsafe | |
| 726 | core::core_arch::arm_shared::neon::generated | vst1_lane_u8 | function | * Neon intrinsic unsafe | |
| 727 | core::core_arch::arm_shared::neon::generated | vst1_p16_x2 | function | * Neon intrinsic unsafe | |
| 728 | core::core_arch::arm_shared::neon::generated | vst1_p16_x3 | function | * Neon intrinsic unsafe | |
| 729 | core::core_arch::arm_shared::neon::generated | vst1_p16_x4 | function | * Neon intrinsic unsafe | |
| 730 | core::core_arch::arm_shared::neon::generated | vst1_p64_x2 | function | * Neon intrinsic unsafe | |
| 731 | core::core_arch::arm_shared::neon::generated | vst1_p64_x3 | function | * Neon intrinsic unsafe | |
| 732 | core::core_arch::arm_shared::neon::generated | vst1_p64_x4 | function | * Neon intrinsic unsafe | |
| 733 | core::core_arch::arm_shared::neon::generated | vst1_p8_x2 | function | * Neon intrinsic unsafe | |
| 734 | core::core_arch::arm_shared::neon::generated | vst1_p8_x3 | function | * Neon intrinsic unsafe | |
| 735 | core::core_arch::arm_shared::neon::generated | vst1_p8_x4 | function | * Neon intrinsic unsafe | |
| 736 | core::core_arch::arm_shared::neon::generated | vst1_s16_x2 | function | * Neon intrinsic unsafe | |
| 737 | core::core_arch::arm_shared::neon::generated | vst1_s16_x3 | function | * Neon intrinsic unsafe | |
| 738 | core::core_arch::arm_shared::neon::generated | vst1_s16_x4 | function | * Neon intrinsic unsafe | |
| 739 | core::core_arch::arm_shared::neon::generated | vst1_s32_x2 | function | * Neon intrinsic unsafe | |
| 740 | core::core_arch::arm_shared::neon::generated | vst1_s32_x3 | function | * Neon intrinsic unsafe | |
| 741 | core::core_arch::arm_shared::neon::generated | vst1_s32_x4 | function | * Neon intrinsic unsafe | |
| 742 | core::core_arch::arm_shared::neon::generated | vst1_s64_x2 | function | * Neon intrinsic unsafe | |
| 743 | core::core_arch::arm_shared::neon::generated | vst1_s64_x3 | function | * Neon intrinsic unsafe | |
| 744 | core::core_arch::arm_shared::neon::generated | vst1_s64_x4 | function | * Neon intrinsic unsafe | |
| 745 | core::core_arch::arm_shared::neon::generated | vst1_s8_x2 | function | * Neon intrinsic unsafe | |
| 746 | core::core_arch::arm_shared::neon::generated | vst1_s8_x3 | function | * Neon intrinsic unsafe | |
| 747 | core::core_arch::arm_shared::neon::generated | vst1_s8_x4 | function | * Neon intrinsic unsafe | |
| 748 | core::core_arch::arm_shared::neon::generated | vst1_u16_x2 | function | * Neon intrinsic unsafe | |
| 749 | core::core_arch::arm_shared::neon::generated | vst1_u16_x3 | function | * Neon intrinsic unsafe | |
| 750 | core::core_arch::arm_shared::neon::generated | vst1_u16_x4 | function | * Neon intrinsic unsafe | |
| 751 | core::core_arch::arm_shared::neon::generated | vst1_u32_x2 | function | * Neon intrinsic unsafe | |
| 752 | core::core_arch::arm_shared::neon::generated | vst1_u32_x3 | function | * Neon intrinsic unsafe | |
| 753 | core::core_arch::arm_shared::neon::generated | vst1_u32_x4 | function | * Neon intrinsic unsafe | |
| 754 | core::core_arch::arm_shared::neon::generated | vst1_u64_x2 | function | * Neon intrinsic unsafe | |
| 755 | core::core_arch::arm_shared::neon::generated | vst1_u64_x3 | function | * Neon intrinsic unsafe | |
| 756 | core::core_arch::arm_shared::neon::generated | vst1_u64_x4 | function | * Neon intrinsic unsafe | |
| 757 | core::core_arch::arm_shared::neon::generated | vst1_u8_x2 | function | * Neon intrinsic unsafe | |
| 758 | core::core_arch::arm_shared::neon::generated | vst1_u8_x3 | function | * Neon intrinsic unsafe | |
| 759 | core::core_arch::arm_shared::neon::generated | vst1_u8_x4 | function | * Neon intrinsic unsafe | |
| 760 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x2 | function | * Neon intrinsic unsafe | |
| 761 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x3 | function | * Neon intrinsic unsafe | |
| 762 | core::core_arch::arm_shared::neon::generated | vst1q_f16_x4 | function | * Neon intrinsic unsafe | |
| 763 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x2 | function | * Neon intrinsic unsafe | |
| 764 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x3 | function | * Neon intrinsic unsafe | |
| 765 | core::core_arch::arm_shared::neon::generated | vst1q_f32_x4 | function | * Neon intrinsic unsafe | |
| 766 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f16 | function | * Neon intrinsic unsafe | |
| 767 | core::core_arch::arm_shared::neon::generated | vst1q_lane_f32 | function | * Neon intrinsic unsafe | |
| 768 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p16 | function | * Neon intrinsic unsafe | |
| 769 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p64 | function | * Neon intrinsic unsafe | |
| 770 | core::core_arch::arm_shared::neon::generated | vst1q_lane_p8 | function | * Neon intrinsic unsafe | |
| 771 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s16 | function | * Neon intrinsic unsafe | |
| 772 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s32 | function | * Neon intrinsic unsafe | |
| 773 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s64 | function | * Neon intrinsic unsafe | |
| 774 | core::core_arch::arm_shared::neon::generated | vst1q_lane_s8 | function | * Neon intrinsic unsafe | |
| 775 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u16 | function | * Neon intrinsic unsafe | |
| 776 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u32 | function | * Neon intrinsic unsafe | |
| 777 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u64 | function | * Neon intrinsic unsafe | |
| 778 | core::core_arch::arm_shared::neon::generated | vst1q_lane_u8 | function | * Neon intrinsic unsafe | |
| 779 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x2 | function | * Neon intrinsic unsafe | |
| 780 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x3 | function | * Neon intrinsic unsafe | |
| 781 | core::core_arch::arm_shared::neon::generated | vst1q_p16_x4 | function | * Neon intrinsic unsafe | |
| 782 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x2 | function | * Neon intrinsic unsafe | |
| 783 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x3 | function | * Neon intrinsic unsafe | |
| 784 | core::core_arch::arm_shared::neon::generated | vst1q_p64_x4 | function | * Neon intrinsic unsafe | |
| 785 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x2 | function | * Neon intrinsic unsafe | |
| 786 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x3 | function | * Neon intrinsic unsafe | |
| 787 | core::core_arch::arm_shared::neon::generated | vst1q_p8_x4 | function | * Neon intrinsic unsafe | |
| 788 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x2 | function | * Neon intrinsic unsafe | |
| 789 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x3 | function | * Neon intrinsic unsafe | |
| 790 | core::core_arch::arm_shared::neon::generated | vst1q_s16_x4 | function | * Neon intrinsic unsafe | |
| 791 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x2 | function | * Neon intrinsic unsafe | |
| 792 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x3 | function | * Neon intrinsic unsafe | |
| 793 | core::core_arch::arm_shared::neon::generated | vst1q_s32_x4 | function | * Neon intrinsic unsafe | |
| 794 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x2 | function | * Neon intrinsic unsafe | |
| 795 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x3 | function | * Neon intrinsic unsafe | |
| 796 | core::core_arch::arm_shared::neon::generated | vst1q_s64_x4 | function | * Neon intrinsic unsafe | |
| 797 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x2 | function | * Neon intrinsic unsafe | |
| 798 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x3 | function | * Neon intrinsic unsafe | |
| 799 | core::core_arch::arm_shared::neon::generated | vst1q_s8_x4 | function | * Neon intrinsic unsafe | |
| 800 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x2 | function | * Neon intrinsic unsafe | |
| 801 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x3 | function | * Neon intrinsic unsafe | |
| 802 | core::core_arch::arm_shared::neon::generated | vst1q_u16_x4 | function | * Neon intrinsic unsafe | |
| 803 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x2 | function | * Neon intrinsic unsafe | |
| 804 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x3 | function | * Neon intrinsic unsafe | |
| 805 | core::core_arch::arm_shared::neon::generated | vst1q_u32_x4 | function | * Neon intrinsic unsafe | |
| 806 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x2 | function | * Neon intrinsic unsafe | |
| 807 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x3 | function | * Neon intrinsic unsafe | |
| 808 | core::core_arch::arm_shared::neon::generated | vst1q_u64_x4 | function | * Neon intrinsic unsafe | |
| 809 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x2 | function | * Neon intrinsic unsafe | |
| 810 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x3 | function | * Neon intrinsic unsafe | |
| 811 | core::core_arch::arm_shared::neon::generated | vst1q_u8_x4 | function | * Neon intrinsic unsafe | |
| 812 | core::core_arch::arm_shared::neon::generated | vst2_f16 | function | * Neon intrinsic unsafe | |
| 813 | core::core_arch::arm_shared::neon::generated | vst2_f32 | function | * Neon intrinsic unsafe | |
| 814 | core::core_arch::arm_shared::neon::generated | vst2_lane_f16 | function | * Neon intrinsic unsafe | |
| 815 | core::core_arch::arm_shared::neon::generated | vst2_lane_f32 | function | * Neon intrinsic unsafe | |
| 816 | core::core_arch::arm_shared::neon::generated | vst2_lane_p16 | function | * Neon intrinsic unsafe | |
| 817 | core::core_arch::arm_shared::neon::generated | vst2_lane_p8 | function | * Neon intrinsic unsafe | |
| 818 | core::core_arch::arm_shared::neon::generated | vst2_lane_s16 | function | * Neon intrinsic unsafe | |
| 819 | core::core_arch::arm_shared::neon::generated | vst2_lane_s32 | function | * Neon intrinsic unsafe | |
| 820 | core::core_arch::arm_shared::neon::generated | vst2_lane_s8 | function | * Neon intrinsic unsafe | |
| 821 | core::core_arch::arm_shared::neon::generated | vst2_lane_u16 | function | * Neon intrinsic unsafe | |
| 822 | core::core_arch::arm_shared::neon::generated | vst2_lane_u32 | function | * Neon intrinsic unsafe | |
| 823 | core::core_arch::arm_shared::neon::generated | vst2_lane_u8 | function | * Neon intrinsic unsafe | |
| 824 | core::core_arch::arm_shared::neon::generated | vst2_p16 | function | * Neon intrinsic unsafe | |
| 825 | core::core_arch::arm_shared::neon::generated | vst2_p64 | function | * Neon intrinsic unsafe | |
| 826 | core::core_arch::arm_shared::neon::generated | vst2_p8 | function | * Neon intrinsic unsafe | |
| 827 | core::core_arch::arm_shared::neon::generated | vst2_s16 | function | * Neon intrinsic unsafe | |
| 828 | core::core_arch::arm_shared::neon::generated | vst2_s32 | function | * Neon intrinsic unsafe | |
| 829 | core::core_arch::arm_shared::neon::generated | vst2_s64 | function | * Neon intrinsic unsafe | |
| 830 | core::core_arch::arm_shared::neon::generated | vst2_s8 | function | * Neon intrinsic unsafe | |
| 831 | core::core_arch::arm_shared::neon::generated | vst2_u16 | function | * Neon intrinsic unsafe | |
| 832 | core::core_arch::arm_shared::neon::generated | vst2_u32 | function | * Neon intrinsic unsafe | |
| 833 | core::core_arch::arm_shared::neon::generated | vst2_u64 | function | * Neon intrinsic unsafe | |
| 834 | core::core_arch::arm_shared::neon::generated | vst2_u8 | function | * Neon intrinsic unsafe | |
| 835 | core::core_arch::arm_shared::neon::generated | vst2q_f16 | function | * Neon intrinsic unsafe | |
| 836 | core::core_arch::arm_shared::neon::generated | vst2q_f32 | function | * Neon intrinsic unsafe | |
| 837 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f16 | function | * Neon intrinsic unsafe | |
| 838 | core::core_arch::arm_shared::neon::generated | vst2q_lane_f32 | function | * Neon intrinsic unsafe | |
| 839 | core::core_arch::arm_shared::neon::generated | vst2q_lane_p16 | function | * Neon intrinsic unsafe | |
| 840 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s16 | function | * Neon intrinsic unsafe | |
| 841 | core::core_arch::arm_shared::neon::generated | vst2q_lane_s32 | function | * Neon intrinsic unsafe | |
| 842 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u16 | function | * Neon intrinsic unsafe | |
| 843 | core::core_arch::arm_shared::neon::generated | vst2q_lane_u32 | function | * Neon intrinsic unsafe | |
| 844 | core::core_arch::arm_shared::neon::generated | vst2q_p16 | function | * Neon intrinsic unsafe | |
| 845 | core::core_arch::arm_shared::neon::generated | vst2q_p8 | function | * Neon intrinsic unsafe | |
| 846 | core::core_arch::arm_shared::neon::generated | vst2q_s16 | function | * Neon intrinsic unsafe | |
| 847 | core::core_arch::arm_shared::neon::generated | vst2q_s32 | function | * Neon intrinsic unsafe | |
| 848 | core::core_arch::arm_shared::neon::generated | vst2q_s8 | function | * Neon intrinsic unsafe | |
| 849 | core::core_arch::arm_shared::neon::generated | vst2q_u16 | function | * Neon intrinsic unsafe | |
| 850 | core::core_arch::arm_shared::neon::generated | vst2q_u32 | function | * Neon intrinsic unsafe | |
| 851 | core::core_arch::arm_shared::neon::generated | vst2q_u8 | function | * Neon intrinsic unsafe | |
| 852 | core::core_arch::arm_shared::neon::generated | vst3_f16 | function | * Neon intrinsic unsafe | |
| 853 | core::core_arch::arm_shared::neon::generated | vst3_f32 | function | * Neon intrinsic unsafe | |
| 854 | core::core_arch::arm_shared::neon::generated | vst3_lane_f16 | function | * Neon intrinsic unsafe | |
| 855 | core::core_arch::arm_shared::neon::generated | vst3_lane_f32 | function | * Neon intrinsic unsafe | |
| 856 | core::core_arch::arm_shared::neon::generated | vst3_lane_p16 | function | * Neon intrinsic unsafe | |
| 857 | core::core_arch::arm_shared::neon::generated | vst3_lane_p8 | function | * Neon intrinsic unsafe | |
| 858 | core::core_arch::arm_shared::neon::generated | vst3_lane_s16 | function | * Neon intrinsic unsafe | |
| 859 | core::core_arch::arm_shared::neon::generated | vst3_lane_s32 | function | * Neon intrinsic unsafe | |
| 860 | core::core_arch::arm_shared::neon::generated | vst3_lane_s8 | function | * Neon intrinsic unsafe | |
| 861 | core::core_arch::arm_shared::neon::generated | vst3_lane_u16 | function | * Neon intrinsic unsafe | |
| 862 | core::core_arch::arm_shared::neon::generated | vst3_lane_u32 | function | * Neon intrinsic unsafe | |
| 863 | core::core_arch::arm_shared::neon::generated | vst3_lane_u8 | function | * Neon intrinsic unsafe | |
| 864 | core::core_arch::arm_shared::neon::generated | vst3_p16 | function | * Neon intrinsic unsafe | |
| 865 | core::core_arch::arm_shared::neon::generated | vst3_p64 | function | * Neon intrinsic unsafe | |
| 866 | core::core_arch::arm_shared::neon::generated | vst3_p8 | function | * Neon intrinsic unsafe | |
| 867 | core::core_arch::arm_shared::neon::generated | vst3_s16 | function | * Neon intrinsic unsafe | |
| 868 | core::core_arch::arm_shared::neon::generated | vst3_s32 | function | * Neon intrinsic unsafe | |
| 869 | core::core_arch::arm_shared::neon::generated | vst3_s64 | function | * Neon intrinsic unsafe | |
| 870 | core::core_arch::arm_shared::neon::generated | vst3_s8 | function | * Neon intrinsic unsafe | |
| 871 | core::core_arch::arm_shared::neon::generated | vst3_u16 | function | * Neon intrinsic unsafe | |
| 872 | core::core_arch::arm_shared::neon::generated | vst3_u32 | function | * Neon intrinsic unsafe | |
| 873 | core::core_arch::arm_shared::neon::generated | vst3_u64 | function | * Neon intrinsic unsafe | |
| 874 | core::core_arch::arm_shared::neon::generated | vst3_u8 | function | * Neon intrinsic unsafe | |
| 875 | core::core_arch::arm_shared::neon::generated | vst3q_f16 | function | * Neon intrinsic unsafe | |
| 876 | core::core_arch::arm_shared::neon::generated | vst3q_f32 | function | * Neon intrinsic unsafe | |
| 877 | core::core_arch::arm_shared::neon::generated | vst3q_lane_f16 | function | * Neon intrinsic unsafe | |
| 878 | core::core_arch::arm_shared::neon::generated | vst3q_lane_f32 | function | * Neon intrinsic unsafe | |
| 879 | core::core_arch::arm_shared::neon::generated | vst3q_lane_p16 | function | * Neon intrinsic unsafe | |
| 880 | core::core_arch::arm_shared::neon::generated | vst3q_lane_s16 | function | * Neon intrinsic unsafe | |
| 881 | core::core_arch::arm_shared::neon::generated | vst3q_lane_s32 | function | * Neon intrinsic unsafe | |
| 882 | core::core_arch::arm_shared::neon::generated | vst3q_lane_u16 | function | * Neon intrinsic unsafe | |
| 883 | core::core_arch::arm_shared::neon::generated | vst3q_lane_u32 | function | * Neon intrinsic unsafe | |
| 884 | core::core_arch::arm_shared::neon::generated | vst3q_p16 | function | * Neon intrinsic unsafe | |
| 885 | core::core_arch::arm_shared::neon::generated | vst3q_p8 | function | * Neon intrinsic unsafe | |
| 886 | core::core_arch::arm_shared::neon::generated | vst3q_s16 | function | * Neon intrinsic unsafe | |
| 887 | core::core_arch::arm_shared::neon::generated | vst3q_s32 | function | * Neon intrinsic unsafe | |
| 888 | core::core_arch::arm_shared::neon::generated | vst3q_s8 | function | * Neon intrinsic unsafe | |
| 889 | core::core_arch::arm_shared::neon::generated | vst3q_u16 | function | * Neon intrinsic unsafe | |
| 890 | core::core_arch::arm_shared::neon::generated | vst3q_u32 | function | * Neon intrinsic unsafe | |
| 891 | core::core_arch::arm_shared::neon::generated | vst3q_u8 | function | * Neon intrinsic unsafe | |
| 892 | core::core_arch::arm_shared::neon::generated | vst4_f16 | function | * Neon intrinsic unsafe | |
| 893 | core::core_arch::arm_shared::neon::generated | vst4_f32 | function | * Neon intrinsic unsafe | |
| 894 | core::core_arch::arm_shared::neon::generated | vst4_lane_f16 | function | * Neon intrinsic unsafe | |
| 895 | core::core_arch::arm_shared::neon::generated | vst4_lane_f32 | function | * Neon intrinsic unsafe | |
| 896 | core::core_arch::arm_shared::neon::generated | vst4_lane_p16 | function | * Neon intrinsic unsafe | |
| 897 | core::core_arch::arm_shared::neon::generated | vst4_lane_p8 | function | * Neon intrinsic unsafe | |
| 898 | core::core_arch::arm_shared::neon::generated | vst4_lane_s16 | function | * Neon intrinsic unsafe | |
| 899 | core::core_arch::arm_shared::neon::generated | vst4_lane_s32 | function | * Neon intrinsic unsafe | |
| 900 | core::core_arch::arm_shared::neon::generated | vst4_lane_s8 | function | * Neon intrinsic unsafe | |
| 901 | core::core_arch::arm_shared::neon::generated | vst4_lane_u16 | function | * Neon intrinsic unsafe | |
| 902 | core::core_arch::arm_shared::neon::generated | vst4_lane_u32 | function | * Neon intrinsic unsafe | |
| 903 | core::core_arch::arm_shared::neon::generated | vst4_lane_u8 | function | * Neon intrinsic unsafe | |
| 904 | core::core_arch::arm_shared::neon::generated | vst4_p16 | function | * Neon intrinsic unsafe | |
| 905 | core::core_arch::arm_shared::neon::generated | vst4_p64 | function | * Neon intrinsic unsafe | |
| 906 | core::core_arch::arm_shared::neon::generated | vst4_p8 | function | * Neon intrinsic unsafe | |
| 907 | core::core_arch::arm_shared::neon::generated | vst4_s16 | function | * Neon intrinsic unsafe | |
| 908 | core::core_arch::arm_shared::neon::generated | vst4_s32 | function | * Neon intrinsic unsafe | |
| 909 | core::core_arch::arm_shared::neon::generated | vst4_s64 | function | * Neon intrinsic unsafe | |
| 910 | core::core_arch::arm_shared::neon::generated | vst4_s8 | function | * Neon intrinsic unsafe | |
| 911 | core::core_arch::arm_shared::neon::generated | vst4_u16 | function | * Neon intrinsic unsafe | |
| 912 | core::core_arch::arm_shared::neon::generated | vst4_u32 | function | * Neon intrinsic unsafe | |
| 913 | core::core_arch::arm_shared::neon::generated | vst4_u64 | function | * Neon intrinsic unsafe | |
| 914 | core::core_arch::arm_shared::neon::generated | vst4_u8 | function | * Neon intrinsic unsafe | |
| 915 | core::core_arch::arm_shared::neon::generated | vst4q_f16 | function | * Neon intrinsic unsafe | |
| 916 | core::core_arch::arm_shared::neon::generated | vst4q_f32 | function | * Neon intrinsic unsafe | |
| 917 | core::core_arch::arm_shared::neon::generated | vst4q_lane_f16 | function | * Neon intrinsic unsafe | |
| 918 | core::core_arch::arm_shared::neon::generated | vst4q_lane_f32 | function | * Neon intrinsic unsafe | |
| 919 | core::core_arch::arm_shared::neon::generated | vst4q_lane_p16 | function | * Neon intrinsic unsafe | |
| 920 | core::core_arch::arm_shared::neon::generated | vst4q_lane_s16 | function | * Neon intrinsic unsafe | |
| 921 | core::core_arch::arm_shared::neon::generated | vst4q_lane_s32 | function | * Neon intrinsic unsafe | |
| 922 | core::core_arch::arm_shared::neon::generated | vst4q_lane_u16 | function | * Neon intrinsic unsafe | |
| 923 | core::core_arch::arm_shared::neon::generated | vst4q_lane_u32 | function | * Neon intrinsic unsafe | |
| 924 | core::core_arch::arm_shared::neon::generated | vst4q_p16 | function | * Neon intrinsic unsafe | |
| 925 | core::core_arch::arm_shared::neon::generated | vst4q_p8 | function | * Neon intrinsic unsafe | |
| 926 | core::core_arch::arm_shared::neon::generated | vst4q_s16 | function | * Neon intrinsic unsafe | |
| 927 | core::core_arch::arm_shared::neon::generated | vst4q_s32 | function | * Neon intrinsic unsafe | |
| 928 | core::core_arch::arm_shared::neon::generated | vst4q_s8 | function | * Neon intrinsic unsafe | |
| 929 | core::core_arch::arm_shared::neon::generated | vst4q_u16 | function | * Neon intrinsic unsafe | |
| 930 | core::core_arch::arm_shared::neon::generated | vst4q_u32 | function | * Neon intrinsic unsafe | |
| 931 | core::core_arch::arm_shared::neon::generated | vst4q_u8 | function | * Neon intrinsic unsafe | |
| 932 | core::core_arch::arm_shared::neon::generated | vstrq_p128 | function | * Neon intrinsic unsafe | |
| 933 | core::core_arch::hexagon::v128 | q6_q_and_qq | function | | |
| 934 | core::core_arch::hexagon::v128 | q6_q_and_qqn | function | | |
| 935 | core::core_arch::hexagon::v128 | q6_q_not_q | function | | |
| 936 | core::core_arch::hexagon::v128 | q6_q_or_qq | function | | |
| 937 | core::core_arch::hexagon::v128 | q6_q_or_qqn | function | | |
| 938 | core::core_arch::hexagon::v128 | q6_q_vand_vr | function | | |
| 939 | core::core_arch::hexagon::v128 | q6_q_vandor_qvr | function | | |
| 940 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vbvb | function | | |
| 941 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vhvh | function | | |
| 942 | core::core_arch::hexagon::v128 | q6_q_vcmp_eq_vwvw | function | | |
| 943 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvbvb | function | | |
| 944 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvhvh | function | | |
| 945 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqand_qvwvw | function | | |
| 946 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvbvb | function | | |
| 947 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvhvh | function | | |
| 948 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqor_qvwvw | function | | |
| 949 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvbvb | function | | |
| 950 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvhvh | function | | |
| 951 | core::core_arch::hexagon::v128 | q6_q_vcmp_eqxacc_qvwvw | function | | |
| 952 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vbvb | function | | |
| 953 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vhfvhf | function | | |
| 954 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vhvh | function | | |
| 955 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vsfvsf | function | | |
| 956 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vubvub | function | | |
| 957 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vuhvuh | function | | |
| 958 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vuwvuw | function | | |
| 959 | core::core_arch::hexagon::v128 | q6_q_vcmp_gt_vwvw | function | | |
| 960 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvbvb | function | | |
| 961 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvhfvhf | function | | |
| 962 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvhvh | function | | |
| 963 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvsfvsf | function | | |
| 964 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvubvub | function | | |
| 965 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvuhvuh | function | | |
| 966 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvuwvuw | function | | |
| 967 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtand_qvwvw | function | | |
| 968 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvbvb | function | | |
| 969 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvhfvhf | function | | |
| 970 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvhvh | function | | |
| 971 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvsfvsf | function | | |
| 972 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvubvub | function | | |
| 973 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvuhvuh | function | | |
| 974 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvuwvuw | function | | |
| 975 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtor_qvwvw | function | | |
| 976 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvbvb | function | | |
| 977 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvhfvhf | function | | |
| 978 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvhvh | function | | |
| 979 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvsfvsf | function | | |
| 980 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvubvub | function | | |
| 981 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvuhvuh | function | | |
| 982 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvuwvuw | function | | |
| 983 | core::core_arch::hexagon::v128 | q6_q_vcmp_gtxacc_qvwvw | function | | |
| 984 | core::core_arch::hexagon::v128 | q6_q_vsetq2_r | function | | |
| 985 | core::core_arch::hexagon::v128 | q6_q_vsetq_r | function | | |
| 986 | core::core_arch::hexagon::v128 | q6_q_xor_qq | function | | |
| 987 | core::core_arch::hexagon::v128 | q6_qb_vshuffe_qhqh | function | | |
| 988 | core::core_arch::hexagon::v128 | q6_qh_vshuffe_qwqw | function | | |
| 989 | core::core_arch::hexagon::v128 | q6_r_vextract_vr | function | | |
| 990 | core::core_arch::hexagon::v128 | q6_v_equals_v | function | | |
| 991 | core::core_arch::hexagon::v128 | q6_v_hi_w | function | | |
| 992 | core::core_arch::hexagon::v128 | q6_v_lo_w | function | | |
| 993 | core::core_arch::hexagon::v128 | q6_v_vabs_v | function | | |
| 994 | core::core_arch::hexagon::v128 | q6_v_valign_vvi | function | | |
| 995 | core::core_arch::hexagon::v128 | q6_v_valign_vvr | function | | |
| 996 | core::core_arch::hexagon::v128 | q6_v_vand_qnr | function | | |
| 997 | core::core_arch::hexagon::v128 | q6_v_vand_qnv | function | | |
| 998 | core::core_arch::hexagon::v128 | q6_v_vand_qr | function | | |
| 999 | core::core_arch::hexagon::v128 | q6_v_vand_qv | function | | |
| 1000 | core::core_arch::hexagon::v128 | q6_v_vand_vv | function | | |
| 1001 | core::core_arch::hexagon::v128 | q6_v_vandor_vqnr | function | | |
| 1002 | core::core_arch::hexagon::v128 | q6_v_vandor_vqr | function | | |
| 1003 | core::core_arch::hexagon::v128 | q6_v_vdelta_vv | function | | |
| 1004 | core::core_arch::hexagon::v128 | q6_v_vfmax_vv | function | | |
| 1005 | core::core_arch::hexagon::v128 | q6_v_vfmin_vv | function | | |
| 1006 | core::core_arch::hexagon::v128 | q6_v_vfneg_v | function | | |
| 1007 | core::core_arch::hexagon::v128 | q6_v_vgetqfext_vr | function | | |
| 1008 | core::core_arch::hexagon::v128 | q6_v_vlalign_vvi | function | | |
| 1009 | core::core_arch::hexagon::v128 | q6_v_vlalign_vvr | function | | |
| 1010 | core::core_arch::hexagon::v128 | q6_v_vmux_qvv | function | | |
| 1011 | core::core_arch::hexagon::v128 | q6_v_vnot_v | function | | |
| 1012 | core::core_arch::hexagon::v128 | q6_v_vor_vv | function | | |
| 1013 | core::core_arch::hexagon::v128 | q6_v_vrdelta_vv | function | | |
| 1014 | core::core_arch::hexagon::v128 | q6_v_vror_vr | function | | |
| 1015 | core::core_arch::hexagon::v128 | q6_v_vsetqfext_vr | function | | |
| 1016 | core::core_arch::hexagon::v128 | q6_v_vsplat_r | function | | |
| 1017 | core::core_arch::hexagon::v128 | q6_v_vxor_vv | function | | |
| 1018 | core::core_arch::hexagon::v128 | q6_v_vzero | function | | |
| 1019 | core::core_arch::hexagon::v128 | q6_vb_condacc_qnvbvb | function | | |
| 1020 | core::core_arch::hexagon::v128 | q6_vb_condacc_qvbvb | function | | |
| 1021 | core::core_arch::hexagon::v128 | q6_vb_condnac_qnvbvb | function | | |
| 1022 | core::core_arch::hexagon::v128 | q6_vb_condnac_qvbvb | function | | |
| 1023 | core::core_arch::hexagon::v128 | q6_vb_prefixsum_q | function | | |
| 1024 | core::core_arch::hexagon::v128 | q6_vb_vabs_vb | function | | |
| 1025 | core::core_arch::hexagon::v128 | q6_vb_vabs_vb_sat | function | | |
| 1026 | core::core_arch::hexagon::v128 | q6_vb_vadd_vbvb | function | | |
| 1027 | core::core_arch::hexagon::v128 | q6_vb_vadd_vbvb_sat | function | | |
| 1028 | core::core_arch::hexagon::v128 | q6_vb_vasr_vhvhr_rnd_sat | function | | |
| 1029 | core::core_arch::hexagon::v128 | q6_vb_vasr_vhvhr_sat | function | | |
| 1030 | core::core_arch::hexagon::v128 | q6_vb_vavg_vbvb | function | | |
| 1031 | core::core_arch::hexagon::v128 | q6_vb_vavg_vbvb_rnd | function | | |
| 1032 | core::core_arch::hexagon::v128 | q6_vb_vcvt_vhfvhf | function | | |
| 1033 | core::core_arch::hexagon::v128 | q6_vb_vdeal_vb | function | | |
| 1034 | core::core_arch::hexagon::v128 | q6_vb_vdeale_vbvb | function | | |
| 1035 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbi | function | ||
| 1036 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbr | function | ||
| 1037 | core::core_arch::hexagon::v128 | q6_vb_vlut32_vbvbr_nomatch | function | ||
| 1038 | core::core_arch::hexagon::v128 | q6_vb_vlut32or_vbvbvbi | function | ||
| 1039 | core::core_arch::hexagon::v128 | q6_vb_vlut32or_vbvbvbr | function | ||
| 1040 | core::core_arch::hexagon::v128 | q6_vb_vmax_vbvb | function | ||
| 1041 | core::core_arch::hexagon::v128 | q6_vb_vmin_vbvb | function | ||
| 1042 | core::core_arch::hexagon::v128 | q6_vb_vnavg_vbvb | function | ||
| 1043 | core::core_arch::hexagon::v128 | q6_vb_vnavg_vubvub | function | ||
| 1044 | core::core_arch::hexagon::v128 | q6_vb_vpack_vhvh_sat | function | ||
| 1045 | core::core_arch::hexagon::v128 | q6_vb_vpacke_vhvh | function | ||
| 1046 | core::core_arch::hexagon::v128 | q6_vb_vpacko_vhvh | function | ||
| 1047 | core::core_arch::hexagon::v128 | q6_vb_vround_vhvh_sat | function | ||
| 1048 | core::core_arch::hexagon::v128 | q6_vb_vshuff_vb | function | ||
| 1049 | core::core_arch::hexagon::v128 | q6_vb_vshuffe_vbvb | function | ||
| 1050 | core::core_arch::hexagon::v128 | q6_vb_vshuffo_vbvb | function | ||
| 1051 | core::core_arch::hexagon::v128 | q6_vb_vsplat_r | function | ||
| 1052 | core::core_arch::hexagon::v128 | q6_vb_vsub_vbvb | function | ||
| 1053 | core::core_arch::hexagon::v128 | q6_vb_vsub_vbvb_sat | function | ||
| 1054 | core::core_arch::hexagon::v128 | q6_vgather_aqrmvh | function | ||
| 1055 | core::core_arch::hexagon::v128 | q6_vgather_aqrmvw | function | ||
| 1056 | core::core_arch::hexagon::v128 | q6_vgather_aqrmww | function | ||
| 1057 | core::core_arch::hexagon::v128 | q6_vgather_armvh | function | ||
| 1058 | core::core_arch::hexagon::v128 | q6_vgather_armvw | function | ||
| 1059 | core::core_arch::hexagon::v128 | q6_vgather_armww | function | ||
| 1060 | core::core_arch::hexagon::v128 | q6_vh_condacc_qnvhvh | function | ||
| 1061 | core::core_arch::hexagon::v128 | q6_vh_condacc_qvhvh | function | ||
| 1062 | core::core_arch::hexagon::v128 | q6_vh_condnac_qnvhvh | function | ||
| 1063 | core::core_arch::hexagon::v128 | q6_vh_condnac_qvhvh | function | ||
| 1064 | core::core_arch::hexagon::v128 | q6_vh_equals_vhf | function | ||
| 1065 | core::core_arch::hexagon::v128 | q6_vh_prefixsum_q | function | ||
| 1066 | core::core_arch::hexagon::v128 | q6_vh_vabs_vh | function | ||
| 1067 | core::core_arch::hexagon::v128 | q6_vh_vabs_vh_sat | function | ||
| 1068 | core::core_arch::hexagon::v128 | q6_vh_vadd_vclb_vhvh | function | ||
| 1069 | core::core_arch::hexagon::v128 | q6_vh_vadd_vhvh | function | ||
| 1070 | core::core_arch::hexagon::v128 | q6_vh_vadd_vhvh_sat | function | ||
| 1071 | core::core_arch::hexagon::v128 | q6_vh_vasl_vhr | function | ||
| 1072 | core::core_arch::hexagon::v128 | q6_vh_vasl_vhvh | function | ||
| 1073 | core::core_arch::hexagon::v128 | q6_vh_vaslacc_vhvhr | function | ||
| 1074 | core::core_arch::hexagon::v128 | q6_vh_vasr_vhr | function | ||
| 1075 | core::core_arch::hexagon::v128 | q6_vh_vasr_vhvh | function | ||
| 1076 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr | function | ||
| 1077 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr_rnd_sat | function | ||
| 1078 | core::core_arch::hexagon::v128 | q6_vh_vasr_vwvwr_sat | function | ||
| 1079 | core::core_arch::hexagon::v128 | q6_vh_vasracc_vhvhr | function | ||
| 1080 | core::core_arch::hexagon::v128 | q6_vh_vavg_vhvh | function | ||
| 1081 | core::core_arch::hexagon::v128 | q6_vh_vavg_vhvh_rnd | function | ||
| 1082 | core::core_arch::hexagon::v128 | q6_vh_vcvt_vhf | function | ||
| 1083 | core::core_arch::hexagon::v128 | q6_vh_vdeal_vh | function | ||
| 1084 | core::core_arch::hexagon::v128 | q6_vh_vdmpy_vubrb | function | ||
| 1085 | core::core_arch::hexagon::v128 | q6_vh_vdmpyacc_vhvubrb | function | ||
| 1086 | core::core_arch::hexagon::v128 | q6_vh_vlsr_vhvh | function | ||
| 1087 | core::core_arch::hexagon::v128 | q6_vh_vmax_vhvh | function | ||
| 1088 | core::core_arch::hexagon::v128 | q6_vh_vmin_vhvh | function | ||
| 1089 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhrh_s1_rnd_sat | function | ||
| 1090 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhrh_s1_sat | function | ||
| 1091 | core::core_arch::hexagon::v128 | q6_vh_vmpy_vhvh_s1_rnd_sat | function | ||
| 1092 | core::core_arch::hexagon::v128 | q6_vh_vmpyi_vhrb | function | ||
| 1093 | core::core_arch::hexagon::v128 | q6_vh_vmpyi_vhvh | function | ||
| 1094 | core::core_arch::hexagon::v128 | q6_vh_vmpyiacc_vhvhrb | function | ||
| 1095 | core::core_arch::hexagon::v128 | q6_vh_vmpyiacc_vhvhvh | function | ||
| 1096 | core::core_arch::hexagon::v128 | q6_vh_vnavg_vhvh | function | ||
| 1097 | core::core_arch::hexagon::v128 | q6_vh_vnormamt_vh | function | ||
| 1098 | core::core_arch::hexagon::v128 | q6_vh_vpack_vwvw_sat | function | ||
| 1099 | core::core_arch::hexagon::v128 | q6_vh_vpacke_vwvw | function | ||
| 1100 | core::core_arch::hexagon::v128 | q6_vh_vpacko_vwvw | function | ||
| 1101 | core::core_arch::hexagon::v128 | q6_vh_vpopcount_vh | function | ||
| 1102 | core::core_arch::hexagon::v128 | q6_vh_vround_vwvw_sat | function | ||
| 1103 | core::core_arch::hexagon::v128 | q6_vh_vsat_vwvw | function | ||
| 1104 | core::core_arch::hexagon::v128 | q6_vh_vshuff_vh | function | ||
| 1105 | core::core_arch::hexagon::v128 | q6_vh_vshuffe_vhvh | function | ||
| 1106 | core::core_arch::hexagon::v128 | q6_vh_vshuffo_vhvh | function | ||
| 1107 | core::core_arch::hexagon::v128 | q6_vh_vsplat_r | function | ||
| 1108 | core::core_arch::hexagon::v128 | q6_vh_vsub_vhvh | function | ||
| 1109 | core::core_arch::hexagon::v128 | q6_vh_vsub_vhvh_sat | function | ||
| 1110 | core::core_arch::hexagon::v128 | q6_vhf_equals_vh | function | ||
| 1111 | core::core_arch::hexagon::v128 | q6_vhf_equals_vqf16 | function | ||
| 1112 | core::core_arch::hexagon::v128 | q6_vhf_equals_wqf32 | function | ||
| 1113 | core::core_arch::hexagon::v128 | q6_vhf_vabs_vhf | function | ||
| 1114 | core::core_arch::hexagon::v128 | q6_vhf_vadd_vhfvhf | function | ||
| 1115 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vh | function | ||
| 1116 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vsfvsf | function | ||
| 1117 | core::core_arch::hexagon::v128 | q6_vhf_vcvt_vuh | function | ||
| 1118 | core::core_arch::hexagon::v128 | q6_vhf_vfmax_vhfvhf | function | ||
| 1119 | core::core_arch::hexagon::v128 | q6_vhf_vfmin_vhfvhf | function | ||
| 1120 | core::core_arch::hexagon::v128 | q6_vhf_vfneg_vhf | function | ||
| 1121 | core::core_arch::hexagon::v128 | q6_vhf_vmax_vhfvhf | function | ||
| 1122 | core::core_arch::hexagon::v128 | q6_vhf_vmin_vhfvhf | function | ||
| 1123 | core::core_arch::hexagon::v128 | q6_vhf_vmpy_vhfvhf | function | ||
| 1124 | core::core_arch::hexagon::v128 | q6_vhf_vmpyacc_vhfvhfvhf | function | ||
| 1125 | core::core_arch::hexagon::v128 | q6_vhf_vsub_vhfvhf | function | ||
| 1126 | core::core_arch::hexagon::v128 | q6_vmem_qnriv | function | ||
| 1127 | core::core_arch::hexagon::v128 | q6_vmem_qnriv_nt | function | ||
| 1128 | core::core_arch::hexagon::v128 | q6_vmem_qriv | function | ||
| 1129 | core::core_arch::hexagon::v128 | q6_vmem_qriv_nt | function | ||
| 1130 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vhfvhf | function | ||
| 1131 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vqf16vhf | function | ||
| 1132 | core::core_arch::hexagon::v128 | q6_vqf16_vadd_vqf16vqf16 | function | ||
| 1133 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vhfvhf | function | ||
| 1134 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vqf16vhf | function | ||
| 1135 | core::core_arch::hexagon::v128 | q6_vqf16_vmpy_vqf16vqf16 | function | ||
| 1136 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vhfvhf | function | ||
| 1137 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vqf16vhf | function | ||
| 1138 | core::core_arch::hexagon::v128 | q6_vqf16_vsub_vqf16vqf16 | function | ||
| 1139 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vqf32vqf32 | function | ||
| 1140 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vqf32vsf | function | ||
| 1141 | core::core_arch::hexagon::v128 | q6_vqf32_vadd_vsfvsf | function | ||
| 1142 | core::core_arch::hexagon::v128 | q6_vqf32_vmpy_vqf32vqf32 | function | ||
| 1143 | core::core_arch::hexagon::v128 | q6_vqf32_vmpy_vsfvsf | function | ||
| 1144 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vqf32vqf32 | function | ||
| 1145 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vqf32vsf | function | ||
| 1146 | core::core_arch::hexagon::v128 | q6_vqf32_vsub_vsfvsf | function | ||
| 1147 | core::core_arch::hexagon::v128 | q6_vscatter_qrmvhv | function | ||
| 1148 | core::core_arch::hexagon::v128 | q6_vscatter_qrmvwv | function | ||
| 1149 | core::core_arch::hexagon::v128 | q6_vscatter_qrmwwv | function | ||
| 1150 | core::core_arch::hexagon::v128 | q6_vscatter_rmvhv | function | ||
| 1151 | core::core_arch::hexagon::v128 | q6_vscatter_rmvwv | function | ||
| 1152 | core::core_arch::hexagon::v128 | q6_vscatter_rmwwv | function | ||
| 1153 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmvhv | function | ||
| 1154 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmvwv | function | ||
| 1155 | core::core_arch::hexagon::v128 | q6_vscatteracc_rmwwv | function | ||
| 1156 | core::core_arch::hexagon::v128 | q6_vsf_equals_vqf32 | function | ||
| 1157 | core::core_arch::hexagon::v128 | q6_vsf_equals_vw | function | ||
| 1158 | core::core_arch::hexagon::v128 | q6_vsf_vabs_vsf | function | ||
| 1159 | core::core_arch::hexagon::v128 | q6_vsf_vadd_vsfvsf | function | ||
| 1160 | core::core_arch::hexagon::v128 | q6_vsf_vdmpy_vhfvhf | function | ||
| 1161 | core::core_arch::hexagon::v128 | q6_vsf_vdmpyacc_vsfvhfvhf | function | ||
| 1162 | core::core_arch::hexagon::v128 | q6_vsf_vfmax_vsfvsf | function | ||
| 1163 | core::core_arch::hexagon::v128 | q6_vsf_vfmin_vsfvsf | function | ||
| 1164 | core::core_arch::hexagon::v128 | q6_vsf_vfneg_vsf | function | ||
| 1165 | core::core_arch::hexagon::v128 | q6_vsf_vmax_vsfvsf | function | ||
| 1166 | core::core_arch::hexagon::v128 | q6_vsf_vmin_vsfvsf | function | ||
| 1167 | core::core_arch::hexagon::v128 | q6_vsf_vmpy_vsfvsf | function | ||
| 1168 | core::core_arch::hexagon::v128 | q6_vsf_vsub_vsfvsf | function | ||
| 1169 | core::core_arch::hexagon::v128 | q6_vub_vabsdiff_vubvub | function | ||
| 1170 | core::core_arch::hexagon::v128 | q6_vub_vadd_vubvb_sat | function | ||
| 1171 | core::core_arch::hexagon::v128 | q6_vub_vadd_vubvub_sat | function | ||
| 1172 | core::core_arch::hexagon::v128 | q6_vub_vasr_vhvhr_rnd_sat | function | ||
| 1173 | core::core_arch::hexagon::v128 | q6_vub_vasr_vhvhr_sat | function | ||
| 1174 | core::core_arch::hexagon::v128 | q6_vub_vasr_vuhvuhr_rnd_sat | function | ||
| 1175 | core::core_arch::hexagon::v128 | q6_vub_vasr_vuhvuhr_sat | function | ||
| 1176 | core::core_arch::hexagon::v128 | q6_vub_vasr_wuhvub_rnd_sat | function | ||
| 1177 | core::core_arch::hexagon::v128 | q6_vub_vasr_wuhvub_sat | function | ||
| 1178 | core::core_arch::hexagon::v128 | q6_vub_vavg_vubvub | function | ||
| 1179 | core::core_arch::hexagon::v128 | q6_vub_vavg_vubvub_rnd | function | ||
| 1180 | core::core_arch::hexagon::v128 | q6_vub_vcvt_vhfvhf | function | ||
| 1181 | core::core_arch::hexagon::v128 | q6_vub_vlsr_vubr | function | ||
| 1182 | core::core_arch::hexagon::v128 | q6_vub_vmax_vubvub | function | ||
| 1183 | core::core_arch::hexagon::v128 | q6_vub_vmin_vubvub | function | ||
| 1184 | core::core_arch::hexagon::v128 | q6_vub_vpack_vhvh_sat | function | ||
| 1185 | core::core_arch::hexagon::v128 | q6_vub_vround_vhvh_sat | function | ||
| 1186 | core::core_arch::hexagon::v128 | q6_vub_vround_vuhvuh_sat | function | ||
| 1187 | core::core_arch::hexagon::v128 | q6_vub_vsat_vhvh | function | ||
| 1188 | core::core_arch::hexagon::v128 | q6_vub_vsub_vubvb_sat | function | ||
| 1189 | core::core_arch::hexagon::v128 | q6_vub_vsub_vubvub_sat | function | ||
| 1190 | core::core_arch::hexagon::v128 | q6_vuh_vabsdiff_vhvh | function | ||
| 1191 | core::core_arch::hexagon::v128 | q6_vuh_vabsdiff_vuhvuh | function | ||
| 1192 | core::core_arch::hexagon::v128 | q6_vuh_vadd_vuhvuh_sat | function | ||
| 1193 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vuwvuwr_rnd_sat | function | ||
| 1194 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vuwvuwr_sat | function | ||
| 1195 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vwvwr_rnd_sat | function | ||
| 1196 | core::core_arch::hexagon::v128 | q6_vuh_vasr_vwvwr_sat | function | ||
| 1197 | core::core_arch::hexagon::v128 | q6_vuh_vasr_wwvuh_rnd_sat | function | ||
| 1198 | core::core_arch::hexagon::v128 | q6_vuh_vasr_wwvuh_sat | function | ||
| 1199 | core::core_arch::hexagon::v128 | q6_vuh_vavg_vuhvuh | function | ||
| 1200 | core::core_arch::hexagon::v128 | q6_vuh_vavg_vuhvuh_rnd | function | ||
| 1201 | core::core_arch::hexagon::v128 | q6_vuh_vcl0_vuh | function | ||
| 1202 | core::core_arch::hexagon::v128 | q6_vuh_vcvt_vhf | function | ||
| 1203 | core::core_arch::hexagon::v128 | q6_vuh_vlsr_vuhr | function | ||
| 1204 | core::core_arch::hexagon::v128 | q6_vuh_vmax_vuhvuh | function | ||
| 1205 | core::core_arch::hexagon::v128 | q6_vuh_vmin_vuhvuh | function | ||
| 1206 | core::core_arch::hexagon::v128 | q6_vuh_vmpy_vuhvuh_rs16 | function | ||
| 1207 | core::core_arch::hexagon::v128 | q6_vuh_vpack_vwvw_sat | function | ||
| 1208 | core::core_arch::hexagon::v128 | q6_vuh_vround_vuwvuw_sat | function | ||
| 1209 | core::core_arch::hexagon::v128 | q6_vuh_vround_vwvw_sat | function | ||
| 1210 | core::core_arch::hexagon::v128 | q6_vuh_vsat_vuwvuw | function | ||
| 1211 | core::core_arch::hexagon::v128 | q6_vuh_vsub_vuhvuh_sat | function | ||
| 1212 | core::core_arch::hexagon::v128 | q6_vuw_vabsdiff_vwvw | function | ||
| 1213 | core::core_arch::hexagon::v128 | q6_vuw_vadd_vuwvuw_sat | function | ||
| 1214 | core::core_arch::hexagon::v128 | q6_vuw_vavg_vuwvuw | function | ||
| 1215 | core::core_arch::hexagon::v128 | q6_vuw_vavg_vuwvuw_rnd | function | ||
| 1216 | core::core_arch::hexagon::v128 | q6_vuw_vcl0_vuw | function | ||
| 1217 | core::core_arch::hexagon::v128 | q6_vuw_vlsr_vuwr | function | ||
| 1218 | core::core_arch::hexagon::v128 | q6_vuw_vmpye_vuhruh | function | ||
| 1219 | core::core_arch::hexagon::v128 | q6_vuw_vmpyeacc_vuwvuhruh | function | ||
| 1220 | core::core_arch::hexagon::v128 | q6_vuw_vrmpy_vubrub | function | ||
| 1221 | core::core_arch::hexagon::v128 | q6_vuw_vrmpy_vubvub | function | ||
| 1222 | core::core_arch::hexagon::v128 | q6_vuw_vrmpyacc_vuwvubrub | function | ||
| 1223 | core::core_arch::hexagon::v128 | q6_vuw_vrmpyacc_vuwvubvub | function | ||
| 1224 | core::core_arch::hexagon::v128 | q6_vuw_vrotr_vuwvuw | function | ||
| 1225 | core::core_arch::hexagon::v128 | q6_vuw_vsub_vuwvuw_sat | function | ||
| 1226 | core::core_arch::hexagon::v128 | q6_vw_condacc_qnvwvw | function | ||
| 1227 | core::core_arch::hexagon::v128 | q6_vw_condacc_qvwvw | function | ||
| 1228 | core::core_arch::hexagon::v128 | q6_vw_condnac_qnvwvw | function | ||
| 1229 | core::core_arch::hexagon::v128 | q6_vw_condnac_qvwvw | function | ||
| 1230 | core::core_arch::hexagon::v128 | q6_vw_equals_vsf | function | ||
| 1231 | core::core_arch::hexagon::v128 | q6_vw_prefixsum_q | function | ||
| 1232 | core::core_arch::hexagon::v128 | q6_vw_vabs_vw | function | ||
| 1233 | core::core_arch::hexagon::v128 | q6_vw_vabs_vw_sat | function | ||
| 1234 | core::core_arch::hexagon::v128 | q6_vw_vadd_vclb_vwvw | function | ||
| 1235 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvw | function | ||
| 1236 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvw_sat | function | ||
| 1237 | core::core_arch::hexagon::v128 | q6_vw_vadd_vwvwq_carry_sat | function | ||
| 1238 | core::core_arch::hexagon::v128 | q6_vw_vasl_vwr | function | ||
| 1239 | core::core_arch::hexagon::v128 | q6_vw_vasl_vwvw | function | ||
| 1240 | core::core_arch::hexagon::v128 | q6_vw_vaslacc_vwvwr | function | ||
| 1241 | core::core_arch::hexagon::v128 | q6_vw_vasr_vwr | function | ||
| 1242 | core::core_arch::hexagon::v128 | q6_vw_vasr_vwvw | function | ||
| 1243 | core::core_arch::hexagon::v128 | q6_vw_vasracc_vwvwr | function | ||
| 1244 | core::core_arch::hexagon::v128 | q6_vw_vavg_vwvw | function | ||
| 1245 | core::core_arch::hexagon::v128 | q6_vw_vavg_vwvw_rnd | function | ||
| 1246 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhrb | function | ||
| 1247 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhrh_sat | function | ||
| 1248 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhruh_sat | function | ||
| 1249 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_vhvh_sat | function | ||
| 1250 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_whrh_sat | function | ||
| 1251 | core::core_arch::hexagon::v128 | q6_vw_vdmpy_whruh_sat | function | ||
| 1252 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhrb | function | ||
| 1253 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhrh_sat | function | ||
| 1254 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhruh_sat | function | ||
| 1255 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwvhvh_sat | function | ||
| 1256 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwwhrh_sat | function | ||
| 1257 | core::core_arch::hexagon::v128 | q6_vw_vdmpyacc_vwwhruh_sat | function | ||
| 1258 | core::core_arch::hexagon::v128 | q6_vw_vfmv_vw | function | ||
| 1259 | core::core_arch::hexagon::v128 | q6_vw_vinsert_vwr | function | ||
| 1260 | core::core_arch::hexagon::v128 | q6_vw_vlsr_vwvw | function | ||
| 1261 | core::core_arch::hexagon::v128 | q6_vw_vmax_vwvw | function | ||
| 1262 | core::core_arch::hexagon::v128 | q6_vw_vmin_vwvw | function | ||
| 1263 | core::core_arch::hexagon::v128 | q6_vw_vmpye_vwvuh | function | ||
| 1264 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrb | function | ||
| 1265 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrh | function | ||
| 1266 | core::core_arch::hexagon::v128 | q6_vw_vmpyi_vwrub | function | ||
| 1267 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrb | function | ||
| 1268 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrh | function | ||
| 1269 | core::core_arch::hexagon::v128 | q6_vw_vmpyiacc_vwvwrub | function | ||
| 1270 | core::core_arch::hexagon::v128 | q6_vw_vmpyie_vwvuh | function | ||
| 1271 | core::core_arch::hexagon::v128 | q6_vw_vmpyieacc_vwvwvh | function | ||
| 1272 | core::core_arch::hexagon::v128 | q6_vw_vmpyieacc_vwvwvuh | function | ||
| 1273 | core::core_arch::hexagon::v128 | q6_vw_vmpyieo_vhvh | function | ||
| 1274 | core::core_arch::hexagon::v128 | q6_vw_vmpyio_vwvh | function | ||
| 1275 | core::core_arch::hexagon::v128 | q6_vw_vmpyo_vwvh_s1_rnd_sat | function | ||
| 1276 | core::core_arch::hexagon::v128 | q6_vw_vmpyo_vwvh_s1_sat | function | ||
| 1277 | core::core_arch::hexagon::v128 | q6_vw_vmpyoacc_vwvwvh_s1_rnd_sat_shift | function | ||
| 1278 | core::core_arch::hexagon::v128 | q6_vw_vmpyoacc_vwvwvh_s1_sat_shift | function | ||
| 1279 | core::core_arch::hexagon::v128 | q6_vw_vnavg_vwvw | function | ||
| 1280 | core::core_arch::hexagon::v128 | q6_vw_vnormamt_vw | function | ||
| 1281 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vbvb | function | ||
| 1282 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vubrb | function | ||
| 1283 | core::core_arch::hexagon::v128 | q6_vw_vrmpy_vubvb | function | ||
| 1284 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvbvb | function | ||
| 1285 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvubrb | function | ||
| 1286 | core::core_arch::hexagon::v128 | q6_vw_vrmpyacc_vwvubvb | function | ||
| 1287 | core::core_arch::hexagon::v128 | q6_vw_vsatdw_vwvw | function | ||
| 1288 | core::core_arch::hexagon::v128 | q6_vw_vsub_vwvw | function | ||
| 1289 | core::core_arch::hexagon::v128 | q6_vw_vsub_vwvw_sat | function | ||
| 1290 | core::core_arch::hexagon::v128 | q6_w_equals_w | function | ||
| 1291 | core::core_arch::hexagon::v128 | q6_w_vcombine_vv | function | ||
| 1292 | core::core_arch::hexagon::v128 | q6_w_vdeal_vvr | function | ||
| 1293 | core::core_arch::hexagon::v128 | q6_w_vmpye_vwvuh | function | ||
| 1294 | core::core_arch::hexagon::v128 | q6_w_vmpyoacc_wvwvh | function | ||
| 1295 | core::core_arch::hexagon::v128 | q6_w_vshuff_vvr | function | ||
| 1296 | core::core_arch::hexagon::v128 | q6_w_vswap_qvv | function | ||
| 1297 | core::core_arch::hexagon::v128 | q6_w_vzero | function | ||
| 1298 | core::core_arch::hexagon::v128 | q6_wb_vadd_wbwb | function | ||
| 1299 | core::core_arch::hexagon::v128 | q6_wb_vadd_wbwb_sat | function | ||
| 1300 | core::core_arch::hexagon::v128 | q6_wb_vshuffoe_vbvb | function | ||
| 1301 | core::core_arch::hexagon::v128 | q6_wb_vsub_wbwb | function | ||
| 1302 | core::core_arch::hexagon::v128 | q6_wb_vsub_wbwb_sat | function | ||
| 1303 | core::core_arch::hexagon::v128 | q6_wh_vadd_vubvub | function | ||
| 1304 | core::core_arch::hexagon::v128 | q6_wh_vadd_whwh | function | ||
| 1305 | core::core_arch::hexagon::v128 | q6_wh_vadd_whwh_sat | function | ||
| 1306 | core::core_arch::hexagon::v128 | q6_wh_vaddacc_whvubvub | function | ||
| 1307 | core::core_arch::hexagon::v128 | q6_wh_vdmpy_wubrb | function | ||
| 1308 | core::core_arch::hexagon::v128 | q6_wh_vdmpyacc_whwubrb | function | ||
| 1309 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhi | function | ||
| 1310 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhr | function | ||
| 1311 | core::core_arch::hexagon::v128 | q6_wh_vlut16_vbvhr_nomatch | function | ||
| 1312 | core::core_arch::hexagon::v128 | q6_wh_vlut16or_whvbvhi | function | ||
| 1313 | core::core_arch::hexagon::v128 | q6_wh_vlut16or_whvbvhr | function | ||
| 1314 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubrb | function | ||
| 1315 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubrub | function | ||
| 1316 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubwb | function | ||
| 1317 | core::core_arch::hexagon::v128 | q6_wh_vmpa_wubwub | function | ||
| 1318 | core::core_arch::hexagon::v128 | q6_wh_vmpaacc_whwubrb | function | ||
| 1319 | core::core_arch::hexagon::v128 | q6_wh_vmpaacc_whwubrub | function | ||
| 1320 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vbvb | function | ||
| 1321 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vubrb | function | ||
| 1322 | core::core_arch::hexagon::v128 | q6_wh_vmpy_vubvb | function | ||
| 1323 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvbvb | function | ||
| 1324 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvubrb | function | ||
| 1325 | core::core_arch::hexagon::v128 | q6_wh_vmpyacc_whvubvb | function | ||
| 1326 | core::core_arch::hexagon::v128 | q6_wh_vshuffoe_vhvh | function | ||
| 1327 | core::core_arch::hexagon::v128 | q6_wh_vsub_vubvub | function | ||
| 1328 | core::core_arch::hexagon::v128 | q6_wh_vsub_whwh | function | ||
| 1329 | core::core_arch::hexagon::v128 | q6_wh_vsub_whwh_sat | function | ||
| 1330 | core::core_arch::hexagon::v128 | q6_wh_vsxt_vb | function | ||
| 1331 | core::core_arch::hexagon::v128 | q6_wh_vtmpy_wbrb | function | ||
| 1332 | core::core_arch::hexagon::v128 | q6_wh_vtmpy_wubrb | function | ||
| 1333 | core::core_arch::hexagon::v128 | q6_wh_vtmpyacc_whwbrb | function | ||
| 1334 | core::core_arch::hexagon::v128 | q6_wh_vtmpyacc_whwubrb | function | ||
| 1335 | core::core_arch::hexagon::v128 | q6_wh_vunpack_vb | function | ||
| 1336 | core::core_arch::hexagon::v128 | q6_wh_vunpackoor_whvb | function | ||
| 1337 | core::core_arch::hexagon::v128 | q6_whf_vcvt2_vb | function | ||
| 1338 | core::core_arch::hexagon::v128 | q6_whf_vcvt2_vub | function | ||
| 1339 | core::core_arch::hexagon::v128 | q6_whf_vcvt_v | function | ||
| 1340 | core::core_arch::hexagon::v128 | q6_whf_vcvt_vb | function | ||
| 1341 | core::core_arch::hexagon::v128 | q6_whf_vcvt_vub | function | ||
| 1342 | core::core_arch::hexagon::v128 | q6_wqf32_vmpy_vhfvhf | function | ||
| 1343 | core::core_arch::hexagon::v128 | q6_wqf32_vmpy_vqf16vhf | function | ||
| 1344 | core::core_arch::hexagon::v128 | q6_wqf32_vmpy_vqf16vqf16 | function | ||
| 1345 | core::core_arch::hexagon::v128 | q6_wsf_vadd_vhfvhf | function | ||
| 1346 | core::core_arch::hexagon::v128 | q6_wsf_vcvt_vhf | function | ||
| 1347 | core::core_arch::hexagon::v128 | q6_wsf_vmpy_vhfvhf | function | ||
| 1348 | core::core_arch::hexagon::v128 | q6_wsf_vmpyacc_wsfvhfvhf | function | ||
| 1349 | core::core_arch::hexagon::v128 | q6_wsf_vsub_vhfvhf | function | ||
| 1350 | core::core_arch::hexagon::v128 | q6_wub_vadd_wubwub_sat | function | ||
| 1351 | core::core_arch::hexagon::v128 | q6_wub_vsub_wubwub_sat | function | ||
| 1352 | core::core_arch::hexagon::v128 | q6_wuh_vadd_wuhwuh_sat | function | ||
| 1353 | core::core_arch::hexagon::v128 | q6_wuh_vmpy_vubrub | function | ||
| 1354 | core::core_arch::hexagon::v128 | q6_wuh_vmpy_vubvub | function | ||
| 1355 | core::core_arch::hexagon::v128 | q6_wuh_vmpyacc_wuhvubrub | function | ||
| 1356 | core::core_arch::hexagon::v128 | q6_wuh_vmpyacc_wuhvubvub | function | ||
| 1357 | core::core_arch::hexagon::v128 | q6_wuh_vsub_wuhwuh_sat | function | ||
| 1358 | core::core_arch::hexagon::v128 | q6_wuh_vunpack_vub | function | ||
| 1359 | core::core_arch::hexagon::v128 | q6_wuh_vzxt_vub | function | ||
| 1360 | core::core_arch::hexagon::v128 | q6_wuw_vadd_wuwwuw_sat | function | ||
| 1361 | core::core_arch::hexagon::v128 | q6_wuw_vdsad_wuhruh | function | ||
| 1362 | core::core_arch::hexagon::v128 | q6_wuw_vdsadacc_wuwwuhruh | function | ||
| 1363 | core::core_arch::hexagon::v128 | q6_wuw_vmpy_vuhruh | function | ||
| 1364 | core::core_arch::hexagon::v128 | q6_wuw_vmpy_vuhvuh | function | ||
| 1365 | core::core_arch::hexagon::v128 | q6_wuw_vmpyacc_wuwvuhruh | function | ||
| 1366 | core::core_arch::hexagon::v128 | q6_wuw_vmpyacc_wuwvuhvuh | function | ||
| 1367 | core::core_arch::hexagon::v128 | q6_wuw_vrmpy_wubrubi | function | ||
| 1368 | core::core_arch::hexagon::v128 | q6_wuw_vrmpyacc_wuwwubrubi | function | ||
| 1369 | core::core_arch::hexagon::v128 | q6_wuw_vrsad_wubrubi | function | ||
| 1370 | core::core_arch::hexagon::v128 | q6_wuw_vrsadacc_wuwwubrubi | function | ||
| 1371 | core::core_arch::hexagon::v128 | q6_wuw_vsub_wuwwuw_sat | function | ||
| 1372 | core::core_arch::hexagon::v128 | q6_wuw_vunpack_vuh | function | ||
| 1373 | core::core_arch::hexagon::v128 | q6_wuw_vzxt_vuh | function | ||
| 1374 | core::core_arch::hexagon::v128 | q6_ww_v6mpy_wubwbi_h | function | ||
| 1375 | core::core_arch::hexagon::v128 | q6_ww_v6mpy_wubwbi_v | function | ||
| 1376 | core::core_arch::hexagon::v128 | q6_ww_v6mpyacc_wwwubwbi_h | function | ||
| 1377 | core::core_arch::hexagon::v128 | q6_ww_v6mpyacc_wwwubwbi_v | function | ||
| 1378 | core::core_arch::hexagon::v128 | q6_ww_vadd_vhvh | function | ||
| 1379 | core::core_arch::hexagon::v128 | q6_ww_vadd_vuhvuh | function | ||
| 1380 | core::core_arch::hexagon::v128 | q6_ww_vadd_wwww | function | ||
| 1381 | core::core_arch::hexagon::v128 | q6_ww_vadd_wwww_sat | function | ||
| 1382 | core::core_arch::hexagon::v128 | q6_ww_vaddacc_wwvhvh | function | ||
| 1383 | core::core_arch::hexagon::v128 | q6_ww_vaddacc_wwvuhvuh | function | ||
| 1384 | core::core_arch::hexagon::v128 | q6_ww_vasrinto_wwvwvw | function | ||
| 1385 | core::core_arch::hexagon::v128 | q6_ww_vdmpy_whrb | function | ||
| 1386 | core::core_arch::hexagon::v128 | q6_ww_vdmpyacc_wwwhrb | function | ||
| 1387 | core::core_arch::hexagon::v128 | q6_ww_vmpa_whrb | function | ||
| 1388 | core::core_arch::hexagon::v128 | q6_ww_vmpa_wuhrb | function | ||
| 1389 | core::core_arch::hexagon::v128 | q6_ww_vmpaacc_wwwhrb | function | ||
| 1390 | core::core_arch::hexagon::v128 | q6_ww_vmpaacc_wwwuhrb | function | ||
| 1391 | core::core_arch::hexagon::v128 | q6_ww_vmpy_vhrh | function | ||
| 1392 | core::core_arch::hexagon::v128 | q6_ww_vmpy_vhvh | function | ||
| 1393 | core::core_arch::hexagon::v128 | q6_ww_vmpy_vhvuh | function | ||
| 1394 | core::core_arch::hexagon::v128 | q6_ww_vmpyacc_wwvhrh | function | | |
| 1395 | core::core_arch::hexagon::v128 | q6_ww_vmpyacc_wwvhrh_sat | function | | |
| 1396 | core::core_arch::hexagon::v128 | q6_ww_vmpyacc_wwvhvh | function | | |
| 1397 | core::core_arch::hexagon::v128 | q6_ww_vmpyacc_wwvhvuh | function | | |
| 1398 | core::core_arch::hexagon::v128 | q6_ww_vrmpy_wubrbi | function | | |
| 1399 | core::core_arch::hexagon::v128 | q6_ww_vrmpyacc_wwwubrbi | function | | |
| 1400 | core::core_arch::hexagon::v128 | q6_ww_vsub_vhvh | function | | |
| 1401 | core::core_arch::hexagon::v128 | q6_ww_vsub_vuhvuh | function | | |
| 1402 | core::core_arch::hexagon::v128 | q6_ww_vsub_wwww | function | | |
| 1403 | core::core_arch::hexagon::v128 | q6_ww_vsub_wwww_sat | function | | |
| 1404 | core::core_arch::hexagon::v128 | q6_ww_vsxt_vh | function | | |
| 1405 | core::core_arch::hexagon::v128 | q6_ww_vtmpy_whrb | function | | |
| 1406 | core::core_arch::hexagon::v128 | q6_ww_vtmpyacc_wwwhrb | function | | |
| 1407 | core::core_arch::hexagon::v128 | q6_ww_vunpack_vh | function | | |
| 1408 | core::core_arch::hexagon::v128 | q6_ww_vunpackoor_wwvh | function | | |
| 1409 | core::core_arch::hexagon::v64 | q6_q_and_qq | function | | |
| 1410 | core::core_arch::hexagon::v64 | q6_q_and_qqn | function | | |
| 1411 | core::core_arch::hexagon::v64 | q6_q_not_q | function | | |
| 1412 | core::core_arch::hexagon::v64 | q6_q_or_qq | function | | |
| 1413 | core::core_arch::hexagon::v64 | q6_q_or_qqn | function | | |
| 1414 | core::core_arch::hexagon::v64 | q6_q_vand_vr | function | | |
| 1415 | core::core_arch::hexagon::v64 | q6_q_vandor_qvr | function | | |
| 1416 | core::core_arch::hexagon::v64 | q6_q_vcmp_eq_vbvb | function | | |
| 1417 | core::core_arch::hexagon::v64 | q6_q_vcmp_eq_vhvh | function | | |
| 1418 | core::core_arch::hexagon::v64 | q6_q_vcmp_eq_vwvw | function | | |
| 1419 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqand_qvbvb | function | | |
| 1420 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqand_qvhvh | function | | |
| 1421 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqand_qvwvw | function | | |
| 1422 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqor_qvbvb | function | | |
| 1423 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqor_qvhvh | function | | |
| 1424 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqor_qvwvw | function | | |
| 1425 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqxacc_qvbvb | function | | |
| 1426 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqxacc_qvhvh | function | | |
| 1427 | core::core_arch::hexagon::v64 | q6_q_vcmp_eqxacc_qvwvw | function | | |
| 1428 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vbvb | function | | |
| 1429 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vhfvhf | function | | |
| 1430 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vhvh | function | | |
| 1431 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vsfvsf | function | | |
| 1432 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vubvub | function | | |
| 1433 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vuhvuh | function | | |
| 1434 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vuwvuw | function | | |
| 1435 | core::core_arch::hexagon::v64 | q6_q_vcmp_gt_vwvw | function | | |
| 1436 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvbvb | function | | |
| 1437 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvhfvhf | function | | |
| 1438 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvhvh | function | | |
| 1439 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvsfvsf | function | | |
| 1440 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvubvub | function | | |
| 1441 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvuhvuh | function | | |
| 1442 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvuwvuw | function | | |
| 1443 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtand_qvwvw | function | | |
| 1444 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvbvb | function | | |
| 1445 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvhfvhf | function | | |
| 1446 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvhvh | function | | |
| 1447 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvsfvsf | function | | |
| 1448 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvubvub | function | | |
| 1449 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvuhvuh | function | | |
| 1450 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvuwvuw | function | | |
| 1451 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtor_qvwvw | function | | |
| 1452 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvbvb | function | | |
| 1453 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvhfvhf | function | | |
| 1454 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvhvh | function | | |
| 1455 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvsfvsf | function | | |
| 1456 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvubvub | function | | |
| 1457 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvuhvuh | function | | |
| 1458 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvuwvuw | function | | |
| 1459 | core::core_arch::hexagon::v64 | q6_q_vcmp_gtxacc_qvwvw | function | | |
| 1460 | core::core_arch::hexagon::v64 | q6_q_vsetq2_r | function | | |
| 1461 | core::core_arch::hexagon::v64 | q6_q_vsetq_r | function | | |
| 1462 | core::core_arch::hexagon::v64 | q6_q_xor_qq | function | | |
| 1463 | core::core_arch::hexagon::v64 | q6_qb_vshuffe_qhqh | function | | |
| 1464 | core::core_arch::hexagon::v64 | q6_qh_vshuffe_qwqw | function | | |
| 1465 | core::core_arch::hexagon::v64 | q6_r_vextract_vr | function | | |
| 1466 | core::core_arch::hexagon::v64 | q6_v_equals_v | function | | |
| 1467 | core::core_arch::hexagon::v64 | q6_v_hi_w | function | | |
| 1468 | core::core_arch::hexagon::v64 | q6_v_lo_w | function | | |
| 1469 | core::core_arch::hexagon::v64 | q6_v_vabs_v | function | | |
| 1470 | core::core_arch::hexagon::v64 | q6_v_valign_vvi | function | | |
| 1471 | core::core_arch::hexagon::v64 | q6_v_valign_vvr | function | | |
| 1472 | core::core_arch::hexagon::v64 | q6_v_vand_qnr | function | | |
| 1473 | core::core_arch::hexagon::v64 | q6_v_vand_qnv | function | | |
| 1474 | core::core_arch::hexagon::v64 | q6_v_vand_qr | function | | |
| 1475 | core::core_arch::hexagon::v64 | q6_v_vand_qv | function | | |
| 1476 | core::core_arch::hexagon::v64 | q6_v_vand_vv | function | | |
| 1477 | core::core_arch::hexagon::v64 | q6_v_vandor_vqnr | function | | |
| 1478 | core::core_arch::hexagon::v64 | q6_v_vandor_vqr | function | | |
| 1479 | core::core_arch::hexagon::v64 | q6_v_vdelta_vv | function | | |
| 1480 | core::core_arch::hexagon::v64 | q6_v_vfmax_vv | function | | |
| 1481 | core::core_arch::hexagon::v64 | q6_v_vfmin_vv | function | | |
| 1482 | core::core_arch::hexagon::v64 | q6_v_vfneg_v | function | | |
| 1483 | core::core_arch::hexagon::v64 | q6_v_vgetqfext_vr | function | | |
| 1484 | core::core_arch::hexagon::v64 | q6_v_vlalign_vvi | function | | |
| 1485 | core::core_arch::hexagon::v64 | q6_v_vlalign_vvr | function | | |
| 1486 | core::core_arch::hexagon::v64 | q6_v_vmux_qvv | function | | |
| 1487 | core::core_arch::hexagon::v64 | q6_v_vnot_v | function | | |
| 1488 | core::core_arch::hexagon::v64 | q6_v_vor_vv | function | | |
| 1489 | core::core_arch::hexagon::v64 | q6_v_vrdelta_vv | function | | |
| 1490 | core::core_arch::hexagon::v64 | q6_v_vror_vr | function | | |
| 1491 | core::core_arch::hexagon::v64 | q6_v_vsetqfext_vr | function | | |
| 1492 | core::core_arch::hexagon::v64 | q6_v_vsplat_r | function | | |
| 1493 | core::core_arch::hexagon::v64 | q6_v_vxor_vv | function | | |
| 1494 | core::core_arch::hexagon::v64 | q6_v_vzero | function | | |
| 1495 | core::core_arch::hexagon::v64 | q6_vb_condacc_qnvbvb | function | | |
| 1496 | core::core_arch::hexagon::v64 | q6_vb_condacc_qvbvb | function | | |
| 1497 | core::core_arch::hexagon::v64 | q6_vb_condnac_qnvbvb | function | | |
| 1498 | core::core_arch::hexagon::v64 | q6_vb_condnac_qvbvb | function | | |
| 1499 | core::core_arch::hexagon::v64 | q6_vb_prefixsum_q | function | | |
| 1500 | core::core_arch::hexagon::v64 | q6_vb_vabs_vb | function | | |
| 1501 | core::core_arch::hexagon::v64 | q6_vb_vabs_vb_sat | function | | |
| 1502 | core::core_arch::hexagon::v64 | q6_vb_vadd_vbvb | function | | |
| 1503 | core::core_arch::hexagon::v64 | q6_vb_vadd_vbvb_sat | function | | |
| 1504 | core::core_arch::hexagon::v64 | q6_vb_vasr_vhvhr_rnd_sat | function | | |
| 1505 | core::core_arch::hexagon::v64 | q6_vb_vasr_vhvhr_sat | function | | |
| 1506 | core::core_arch::hexagon::v64 | q6_vb_vavg_vbvb | function | | |
| 1507 | core::core_arch::hexagon::v64 | q6_vb_vavg_vbvb_rnd | function | | |
| 1508 | core::core_arch::hexagon::v64 | q6_vb_vcvt_vhfvhf | function | | |
| 1509 | core::core_arch::hexagon::v64 | q6_vb_vdeal_vb | function | | |
| 1510 | core::core_arch::hexagon::v64 | q6_vb_vdeale_vbvb | function | | |
| 1511 | core::core_arch::hexagon::v64 | q6_vb_vlut32_vbvbi | function | | |
| 1512 | core::core_arch::hexagon::v64 | q6_vb_vlut32_vbvbr | function | | |
| 1513 | core::core_arch::hexagon::v64 | q6_vb_vlut32_vbvbr_nomatch | function | | |
| 1514 | core::core_arch::hexagon::v64 | q6_vb_vlut32or_vbvbvbi | function | | |
| 1515 | core::core_arch::hexagon::v64 | q6_vb_vlut32or_vbvbvbr | function | | |
| 1516 | core::core_arch::hexagon::v64 | q6_vb_vmax_vbvb | function | | |
| 1517 | core::core_arch::hexagon::v64 | q6_vb_vmin_vbvb | function | | |
| 1518 | core::core_arch::hexagon::v64 | q6_vb_vnavg_vbvb | function | | |
| 1519 | core::core_arch::hexagon::v64 | q6_vb_vnavg_vubvub | function | | |
| 1520 | core::core_arch::hexagon::v64 | q6_vb_vpack_vhvh_sat | function | | |
| 1521 | core::core_arch::hexagon::v64 | q6_vb_vpacke_vhvh | function | | |
| 1522 | core::core_arch::hexagon::v64 | q6_vb_vpacko_vhvh | function | | |
| 1523 | core::core_arch::hexagon::v64 | q6_vb_vround_vhvh_sat | function | | |
| 1524 | core::core_arch::hexagon::v64 | q6_vb_vshuff_vb | function | | |
| 1525 | core::core_arch::hexagon::v64 | q6_vb_vshuffe_vbvb | function | | |
| 1526 | core::core_arch::hexagon::v64 | q6_vb_vshuffo_vbvb | function | | |
| 1527 | core::core_arch::hexagon::v64 | q6_vb_vsplat_r | function | | |
| 1528 | core::core_arch::hexagon::v64 | q6_vb_vsub_vbvb | function | | |
| 1529 | core::core_arch::hexagon::v64 | q6_vb_vsub_vbvb_sat | function | | |
| 1530 | core::core_arch::hexagon::v64 | q6_vgather_aqrmvh | function | | |
| 1531 | core::core_arch::hexagon::v64 | q6_vgather_aqrmvw | function | | |
| 1532 | core::core_arch::hexagon::v64 | q6_vgather_aqrmww | function | | |
| 1533 | core::core_arch::hexagon::v64 | q6_vgather_armvh | function | | |
| 1534 | core::core_arch::hexagon::v64 | q6_vgather_armvw | function | | |
| 1535 | core::core_arch::hexagon::v64 | q6_vgather_armww | function | | |
| 1536 | core::core_arch::hexagon::v64 | q6_vh_condacc_qnvhvh | function | | |
| 1537 | core::core_arch::hexagon::v64 | q6_vh_condacc_qvhvh | function | | |
| 1538 | core::core_arch::hexagon::v64 | q6_vh_condnac_qnvhvh | function | | |
| 1539 | core::core_arch::hexagon::v64 | q6_vh_condnac_qvhvh | function | | |
| 1540 | core::core_arch::hexagon::v64 | q6_vh_equals_vhf | function | | |
| 1541 | core::core_arch::hexagon::v64 | q6_vh_prefixsum_q | function | | |
| 1542 | core::core_arch::hexagon::v64 | q6_vh_vabs_vh | function | | |
| 1543 | core::core_arch::hexagon::v64 | q6_vh_vabs_vh_sat | function | | |
| 1544 | core::core_arch::hexagon::v64 | q6_vh_vadd_vclb_vhvh | function | | |
| 1545 | core::core_arch::hexagon::v64 | q6_vh_vadd_vhvh | function | | |
| 1546 | core::core_arch::hexagon::v64 | q6_vh_vadd_vhvh_sat | function | | |
| 1547 | core::core_arch::hexagon::v64 | q6_vh_vasl_vhr | function | | |
| 1548 | core::core_arch::hexagon::v64 | q6_vh_vasl_vhvh | function | | |
| 1549 | core::core_arch::hexagon::v64 | q6_vh_vaslacc_vhvhr | function | | |
| 1550 | core::core_arch::hexagon::v64 | q6_vh_vasr_vhr | function | | |
| 1551 | core::core_arch::hexagon::v64 | q6_vh_vasr_vhvh | function | | |
| 1552 | core::core_arch::hexagon::v64 | q6_vh_vasr_vwvwr | function | | |
| 1553 | core::core_arch::hexagon::v64 | q6_vh_vasr_vwvwr_rnd_sat | function | | |
| 1554 | core::core_arch::hexagon::v64 | q6_vh_vasr_vwvwr_sat | function | | |
| 1555 | core::core_arch::hexagon::v64 | q6_vh_vasracc_vhvhr | function | | |
| 1556 | core::core_arch::hexagon::v64 | q6_vh_vavg_vhvh | function | | |
| 1557 | core::core_arch::hexagon::v64 | q6_vh_vavg_vhvh_rnd | function | | |
| 1558 | core::core_arch::hexagon::v64 | q6_vh_vcvt_vhf | function | | |
| 1559 | core::core_arch::hexagon::v64 | q6_vh_vdeal_vh | function | | |
| 1560 | core::core_arch::hexagon::v64 | q6_vh_vdmpy_vubrb | function | | |
| 1561 | core::core_arch::hexagon::v64 | q6_vh_vdmpyacc_vhvubrb | function | | |
| 1562 | core::core_arch::hexagon::v64 | q6_vh_vlsr_vhvh | function | | |
| 1563 | core::core_arch::hexagon::v64 | q6_vh_vmax_vhvh | function | | |
| 1564 | core::core_arch::hexagon::v64 | q6_vh_vmin_vhvh | function | | |
| 1565 | core::core_arch::hexagon::v64 | q6_vh_vmpy_vhrh_s1_rnd_sat | function | | |
| 1566 | core::core_arch::hexagon::v64 | q6_vh_vmpy_vhrh_s1_sat | function | | |
| 1567 | core::core_arch::hexagon::v64 | q6_vh_vmpy_vhvh_s1_rnd_sat | function | | |
| 1568 | core::core_arch::hexagon::v64 | q6_vh_vmpyi_vhrb | function | | |
| 1569 | core::core_arch::hexagon::v64 | q6_vh_vmpyi_vhvh | function | | |
| 1570 | core::core_arch::hexagon::v64 | q6_vh_vmpyiacc_vhvhrb | function | | |
| 1571 | core::core_arch::hexagon::v64 | q6_vh_vmpyiacc_vhvhvh | function | | |
| 1572 | core::core_arch::hexagon::v64 | q6_vh_vnavg_vhvh | function | | |
| 1573 | core::core_arch::hexagon::v64 | q6_vh_vnormamt_vh | function | | |
| 1574 | core::core_arch::hexagon::v64 | q6_vh_vpack_vwvw_sat | function | | |
| 1575 | core::core_arch::hexagon::v64 | q6_vh_vpacke_vwvw | function | | |
| 1576 | core::core_arch::hexagon::v64 | q6_vh_vpacko_vwvw | function | | |
| 1577 | core::core_arch::hexagon::v64 | q6_vh_vpopcount_vh | function | | |
| 1578 | core::core_arch::hexagon::v64 | q6_vh_vround_vwvw_sat | function | | |
| 1579 | core::core_arch::hexagon::v64 | q6_vh_vsat_vwvw | function | | |
| 1580 | core::core_arch::hexagon::v64 | q6_vh_vshuff_vh | function | | |
| 1581 | core::core_arch::hexagon::v64 | q6_vh_vshuffe_vhvh | function | | |
| 1582 | core::core_arch::hexagon::v64 | q6_vh_vshuffo_vhvh | function | | |
| 1583 | core::core_arch::hexagon::v64 | q6_vh_vsplat_r | function | | |
| 1584 | core::core_arch::hexagon::v64 | q6_vh_vsub_vhvh | function | | |
| 1585 | core::core_arch::hexagon::v64 | q6_vh_vsub_vhvh_sat | function | | |
| 1586 | core::core_arch::hexagon::v64 | q6_vhf_equals_vh | function | | |
| 1587 | core::core_arch::hexagon::v64 | q6_vhf_equals_vqf16 | function | | |
| 1588 | core::core_arch::hexagon::v64 | q6_vhf_equals_wqf32 | function | | |
| 1589 | core::core_arch::hexagon::v64 | q6_vhf_vabs_vhf | function | | |
| 1590 | core::core_arch::hexagon::v64 | q6_vhf_vadd_vhfvhf | function | | |
| 1591 | core::core_arch::hexagon::v64 | q6_vhf_vcvt_vh | function | | |
| 1592 | core::core_arch::hexagon::v64 | q6_vhf_vcvt_vsfvsf | function | | |
| 1593 | core::core_arch::hexagon::v64 | q6_vhf_vcvt_vuh | function | | |
| 1594 | core::core_arch::hexagon::v64 | q6_vhf_vfmax_vhfvhf | function | | |
| 1595 | core::core_arch::hexagon::v64 | q6_vhf_vfmin_vhfvhf | function | | |
| 1596 | core::core_arch::hexagon::v64 | q6_vhf_vfneg_vhf | function | | |
| 1597 | core::core_arch::hexagon::v64 | q6_vhf_vmax_vhfvhf | function | | |
| 1598 | core::core_arch::hexagon::v64 | q6_vhf_vmin_vhfvhf | function | | |
| 1599 | core::core_arch::hexagon::v64 | q6_vhf_vmpy_vhfvhf | function | | |
| 1600 | core::core_arch::hexagon::v64 | q6_vhf_vmpyacc_vhfvhfvhf | function | | |
| 1601 | core::core_arch::hexagon::v64 | q6_vhf_vsub_vhfvhf | function | | |
| 1602 | core::core_arch::hexagon::v64 | q6_vmem_qnriv | function | | |
| 1603 | core::core_arch::hexagon::v64 | q6_vmem_qnriv_nt | function | | |
| 1604 | core::core_arch::hexagon::v64 | q6_vmem_qriv | function | | |
| 1605 | core::core_arch::hexagon::v64 | q6_vmem_qriv_nt | function | | |
| 1606 | core::core_arch::hexagon::v64 | q6_vqf16_vadd_vhfvhf | function | | |
| 1607 | core::core_arch::hexagon::v64 | q6_vqf16_vadd_vqf16vhf | function | | |
| 1608 | core::core_arch::hexagon::v64 | q6_vqf16_vadd_vqf16vqf16 | function | | |
| 1609 | core::core_arch::hexagon::v64 | q6_vqf16_vmpy_vhfvhf | function | | |
| 1610 | core::core_arch::hexagon::v64 | q6_vqf16_vmpy_vqf16vhf | function | | |
| 1611 | core::core_arch::hexagon::v64 | q6_vqf16_vmpy_vqf16vqf16 | function | | |
| 1612 | core::core_arch::hexagon::v64 | q6_vqf16_vsub_vhfvhf | function | | |
| 1613 | core::core_arch::hexagon::v64 | q6_vqf16_vsub_vqf16vhf | function | | |
| 1614 | core::core_arch::hexagon::v64 | q6_vqf16_vsub_vqf16vqf16 | function | | |
| 1615 | core::core_arch::hexagon::v64 | q6_vqf32_vadd_vqf32vqf32 | function | | |
| 1616 | core::core_arch::hexagon::v64 | q6_vqf32_vadd_vqf32vsf | function | | |
| 1617 | core::core_arch::hexagon::v64 | q6_vqf32_vadd_vsfvsf | function | | |
| 1618 | core::core_arch::hexagon::v64 | q6_vqf32_vmpy_vqf32vqf32 | function | | |
| 1619 | core::core_arch::hexagon::v64 | q6_vqf32_vmpy_vsfvsf | function | | |
| 1620 | core::core_arch::hexagon::v64 | q6_vqf32_vsub_vqf32vqf32 | function | | |
| 1621 | core::core_arch::hexagon::v64 | q6_vqf32_vsub_vqf32vsf | function | | |
| 1622 | core::core_arch::hexagon::v64 | q6_vqf32_vsub_vsfvsf | function | | |
| 1623 | core::core_arch::hexagon::v64 | q6_vscatter_qrmvhv | function | | |
| 1624 | core::core_arch::hexagon::v64 | q6_vscatter_qrmvwv | function | | |
| 1625 | core::core_arch::hexagon::v64 | q6_vscatter_qrmwwv | function | | |
| 1626 | core::core_arch::hexagon::v64 | q6_vscatter_rmvhv | function | | |
| 1627 | core::core_arch::hexagon::v64 | q6_vscatter_rmvwv | function | | |
| 1628 | core::core_arch::hexagon::v64 | q6_vscatter_rmwwv | function | | |
| 1629 | core::core_arch::hexagon::v64 | q6_vscatteracc_rmvhv | function | | |
| 1630 | core::core_arch::hexagon::v64 | q6_vscatteracc_rmvwv | function | | |
| 1631 | core::core_arch::hexagon::v64 | q6_vscatteracc_rmwwv | function | | |
| 1632 | core::core_arch::hexagon::v64 | q6_vsf_equals_vqf32 | function | | |
| 1633 | core::core_arch::hexagon::v64 | q6_vsf_equals_vw | function | | |
| 1634 | core::core_arch::hexagon::v64 | q6_vsf_vabs_vsf | function | | |
| 1635 | core::core_arch::hexagon::v64 | q6_vsf_vadd_vsfvsf | function | | |
| 1636 | core::core_arch::hexagon::v64 | q6_vsf_vdmpy_vhfvhf | function | | |
| 1637 | core::core_arch::hexagon::v64 | q6_vsf_vdmpyacc_vsfvhfvhf | function | | |
| 1638 | core::core_arch::hexagon::v64 | q6_vsf_vfmax_vsfvsf | function | | |
| 1639 | core::core_arch::hexagon::v64 | q6_vsf_vfmin_vsfvsf | function | | |
| 1640 | core::core_arch::hexagon::v64 | q6_vsf_vfneg_vsf | function | | |
| 1641 | core::core_arch::hexagon::v64 | q6_vsf_vmax_vsfvsf | function | | |
| 1642 | core::core_arch::hexagon::v64 | q6_vsf_vmin_vsfvsf | function | | |
| 1643 | core::core_arch::hexagon::v64 | q6_vsf_vmpy_vsfvsf | function | | |
| 1644 | core::core_arch::hexagon::v64 | q6_vsf_vsub_vsfvsf | function | | |
| 1645 | core::core_arch::hexagon::v64 | q6_vub_vabsdiff_vubvub | function | | |
| 1646 | core::core_arch::hexagon::v64 | q6_vub_vadd_vubvb_sat | function | | |
| 1647 | core::core_arch::hexagon::v64 | q6_vub_vadd_vubvub_sat | function | | |
| 1648 | core::core_arch::hexagon::v64 | q6_vub_vasr_vhvhr_rnd_sat | function | | |
| 1649 | core::core_arch::hexagon::v64 | q6_vub_vasr_vhvhr_sat | function | | |
| 1650 | core::core_arch::hexagon::v64 | q6_vub_vasr_vuhvuhr_rnd_sat | function | | |
| 1651 | core::core_arch::hexagon::v64 | q6_vub_vasr_vuhvuhr_sat | function | | |
| 1652 | core::core_arch::hexagon::v64 | q6_vub_vasr_wuhvub_rnd_sat | function | | |
| 1653 | core::core_arch::hexagon::v64 | q6_vub_vasr_wuhvub_sat | function | | |
| 1654 | core::core_arch::hexagon::v64 | q6_vub_vavg_vubvub | function | | |
| 1655 | core::core_arch::hexagon::v64 | q6_vub_vavg_vubvub_rnd | function | | |
| 1656 | core::core_arch::hexagon::v64 | q6_vub_vcvt_vhfvhf | function | | |
| 1657 | core::core_arch::hexagon::v64 | q6_vub_vlsr_vubr | function | | |
| 1658 | core::core_arch::hexagon::v64 | q6_vub_vmax_vubvub | function | | |
| 1659 | core::core_arch::hexagon::v64 | q6_vub_vmin_vubvub | function | | |
| 1660 | core::core_arch::hexagon::v64 | q6_vub_vpack_vhvh_sat | function | | |
| 1661 | core::core_arch::hexagon::v64 | q6_vub_vround_vhvh_sat | function | | |
| 1662 | core::core_arch::hexagon::v64 | q6_vub_vround_vuhvuh_sat | function | | |
| 1663 | core::core_arch::hexagon::v64 | q6_vub_vsat_vhvh | function | | |
| 1664 | core::core_arch::hexagon::v64 | q6_vub_vsub_vubvb_sat | function | | |
| 1665 | core::core_arch::hexagon::v64 | q6_vub_vsub_vubvub_sat | function | | |
| 1666 | core::core_arch::hexagon::v64 | q6_vuh_vabsdiff_vhvh | function | | |
| 1667 | core::core_arch::hexagon::v64 | q6_vuh_vabsdiff_vuhvuh | function | | |
| 1668 | core::core_arch::hexagon::v64 | q6_vuh_vadd_vuhvuh_sat | function | | |
| 1669 | core::core_arch::hexagon::v64 | q6_vuh_vasr_vuwvuwr_rnd_sat | function | | |
| 1670 | core::core_arch::hexagon::v64 | q6_vuh_vasr_vuwvuwr_sat | function | | |
| 1671 | core::core_arch::hexagon::v64 | q6_vuh_vasr_vwvwr_rnd_sat | function | | |
| 1672 | core::core_arch::hexagon::v64 | q6_vuh_vasr_vwvwr_sat | function | | |
| 1673 | core::core_arch::hexagon::v64 | q6_vuh_vasr_wwvuh_rnd_sat | function | | |
| 1674 | core::core_arch::hexagon::v64 | q6_vuh_vasr_wwvuh_sat | function | | |
| 1675 | core::core_arch::hexagon::v64 | q6_vuh_vavg_vuhvuh | function | | |
| 1676 | core::core_arch::hexagon::v64 | q6_vuh_vavg_vuhvuh_rnd | function | | |
| 1677 | core::core_arch::hexagon::v64 | q6_vuh_vcl0_vuh | function | | |
| 1678 | core::core_arch::hexagon::v64 | q6_vuh_vcvt_vhf | function | | |
| 1679 | core::core_arch::hexagon::v64 | q6_vuh_vlsr_vuhr | function | | |
| 1680 | core::core_arch::hexagon::v64 | q6_vuh_vmax_vuhvuh | function | | |
| 1681 | core::core_arch::hexagon::v64 | q6_vuh_vmin_vuhvuh | function | | |
| 1682 | core::core_arch::hexagon::v64 | q6_vuh_vmpy_vuhvuh_rs16 | function | | |
| 1683 | core::core_arch::hexagon::v64 | q6_vuh_vpack_vwvw_sat | function | | |
| 1684 | core::core_arch::hexagon::v64 | q6_vuh_vround_vuwvuw_sat | function | | |
| 1685 | core::core_arch::hexagon::v64 | q6_vuh_vround_vwvw_sat | function | | |
| 1686 | core::core_arch::hexagon::v64 | q6_vuh_vsat_vuwvuw | function | | |
| 1687 | core::core_arch::hexagon::v64 | q6_vuh_vsub_vuhvuh_sat | function | | |
| 1688 | core::core_arch::hexagon::v64 | q6_vuw_vabsdiff_vwvw | function | | |
| 1689 | core::core_arch::hexagon::v64 | q6_vuw_vadd_vuwvuw_sat | function | | |
| 1690 | core::core_arch::hexagon::v64 | q6_vuw_vavg_vuwvuw | function | | |
| 1691 | core::core_arch::hexagon::v64 | q6_vuw_vavg_vuwvuw_rnd | function | | |
| 1692 | core::core_arch::hexagon::v64 | q6_vuw_vcl0_vuw | function | | |
| 1693 | core::core_arch::hexagon::v64 | q6_vuw_vlsr_vuwr | function | | |
| 1694 | core::core_arch::hexagon::v64 | q6_vuw_vmpye_vuhruh | function | | |
| 1695 | core::core_arch::hexagon::v64 | q6_vuw_vmpyeacc_vuwvuhruh | function | | |
| 1696 | core::core_arch::hexagon::v64 | q6_vuw_vrmpy_vubrub | function | | |
| 1697 | core::core_arch::hexagon::v64 | q6_vuw_vrmpy_vubvub | function | | |
| 1698 | core::core_arch::hexagon::v64 | q6_vuw_vrmpyacc_vuwvubrub | function | | |
| 1699 | core::core_arch::hexagon::v64 | q6_vuw_vrmpyacc_vuwvubvub | function | | |
| 1700 | core::core_arch::hexagon::v64 | q6_vuw_vrotr_vuwvuw | function | | |
| 1701 | core::core_arch::hexagon::v64 | q6_vuw_vsub_vuwvuw_sat | function | | |
| 1702 | core::core_arch::hexagon::v64 | q6_vw_condacc_qnvwvw | function | | |
| 1703 | core::core_arch::hexagon::v64 | q6_vw_condacc_qvwvw | function | | |
| 1704 | core::core_arch::hexagon::v64 | q6_vw_condnac_qnvwvw | function | | |
| 1705 | core::core_arch::hexagon::v64 | q6_vw_condnac_qvwvw | function | | |
| 1706 | core::core_arch::hexagon::v64 | q6_vw_equals_vsf | function | | |
| 1707 | core::core_arch::hexagon::v64 | q6_vw_prefixsum_q | function | | |
| 1708 | core::core_arch::hexagon::v64 | q6_vw_vabs_vw | function | | |
| 1709 | core::core_arch::hexagon::v64 | q6_vw_vabs_vw_sat | function | | |
| 1710 | core::core_arch::hexagon::v64 | q6_vw_vadd_vclb_vwvw | function | | |
| 1711 | core::core_arch::hexagon::v64 | q6_vw_vadd_vwvw | function | | |
| 1712 | core::core_arch::hexagon::v64 | q6_vw_vadd_vwvw_sat | function | | |
| 1713 | core::core_arch::hexagon::v64 | q6_vw_vadd_vwvwq_carry_sat | function | | |
| 1714 | core::core_arch::hexagon::v64 | q6_vw_vasl_vwr | function | | |
| 1715 | core::core_arch::hexagon::v64 | q6_vw_vasl_vwvw | function | | |
| 1716 | core::core_arch::hexagon::v64 | q6_vw_vaslacc_vwvwr | function | | |
| 1717 | core::core_arch::hexagon::v64 | q6_vw_vasr_vwr | function | | |
| 1718 | core::core_arch::hexagon::v64 | q6_vw_vasr_vwvw | function | | |
| 1719 | core::core_arch::hexagon::v64 | q6_vw_vasracc_vwvwr | function | | |
| 1720 | core::core_arch::hexagon::v64 | q6_vw_vavg_vwvw | function | | |
| 1721 | core::core_arch::hexagon::v64 | q6_vw_vavg_vwvw_rnd | function | | |
| 1722 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_vhrb | function | | |
| 1723 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_vhrh_sat | function | | |
| 1724 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_vhruh_sat | function | | |
| 1725 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_vhvh_sat | function | | |
| 1726 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_whrh_sat | function | | |
| 1727 | core::core_arch::hexagon::v64 | q6_vw_vdmpy_whruh_sat | function | | |
| 1728 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwvhrb | function | | |
| 1729 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwvhrh_sat | function | | |
| 1730 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwvhruh_sat | function | | |
| 1731 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwvhvh_sat | function | | |
| 1732 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwwhrh_sat | function | | |
| 1733 | core::core_arch::hexagon::v64 | q6_vw_vdmpyacc_vwwhruh_sat | function | | |
| 1734 | core::core_arch::hexagon::v64 | q6_vw_vfmv_vw | function | | |
| 1735 | core::core_arch::hexagon::v64 | q6_vw_vinsert_vwr | function | | |
| 1736 | core::core_arch::hexagon::v64 | q6_vw_vlsr_vwvw | function | | |
| 1737 | core::core_arch::hexagon::v64 | q6_vw_vmax_vwvw | function | | |
| 1738 | core::core_arch::hexagon::v64 | q6_vw_vmin_vwvw | function | | |
| 1739 | core::core_arch::hexagon::v64 | q6_vw_vmpye_vwvuh | function | | |
| 1740 | core::core_arch::hexagon::v64 | q6_vw_vmpyi_vwrb | function | | |
| 1741 | core::core_arch::hexagon::v64 | q6_vw_vmpyi_vwrh | function | | |
| 1742 | core::core_arch::hexagon::v64 | q6_vw_vmpyi_vwrub | function | | |
| 1743 | core::core_arch::hexagon::v64 | q6_vw_vmpyiacc_vwvwrb | function | | |
| 1744 | core::core_arch::hexagon::v64 | q6_vw_vmpyiacc_vwvwrh | function | | |
| 1745 | core::core_arch::hexagon::v64 | q6_vw_vmpyiacc_vwvwrub | function | | |
| 1746 | core::core_arch::hexagon::v64 | q6_vw_vmpyie_vwvuh | function | | |
| 1747 | core::core_arch::hexagon::v64 | q6_vw_vmpyieacc_vwvwvh | function | | |
| 1748 | core::core_arch::hexagon::v64 | q6_vw_vmpyieacc_vwvwvuh | function | | |
| 1749 | core::core_arch::hexagon::v64 | q6_vw_vmpyieo_vhvh | function | | |
| 1750 | core::core_arch::hexagon::v64 | q6_vw_vmpyio_vwvh | function | | |
| 1751 | core::core_arch::hexagon::v64 | q6_vw_vmpyo_vwvh_s1_rnd_sat | function | | |
| 1752 | core::core_arch::hexagon::v64 | q6_vw_vmpyo_vwvh_s1_sat | function | | |
| 1753 | core::core_arch::hexagon::v64 | q6_vw_vmpyoacc_vwvwvh_s1_rnd_sat_shift | function | | |
| 1754 | core::core_arch::hexagon::v64 | q6_vw_vmpyoacc_vwvwvh_s1_sat_shift | function | | |
| 1755 | core::core_arch::hexagon::v64 | q6_vw_vnavg_vwvw | function | | |
| 1756 | core::core_arch::hexagon::v64 | q6_vw_vnormamt_vw | function | | |
| 1757 | core::core_arch::hexagon::v64 | q6_vw_vrmpy_vbvb | function | | |
| 1758 | core::core_arch::hexagon::v64 | q6_vw_vrmpy_vubrb | function | | |
| 1759 | core::core_arch::hexagon::v64 | q6_vw_vrmpy_vubvb | function | | |
| 1760 | core::core_arch::hexagon::v64 | q6_vw_vrmpyacc_vwvbvb | function | | |
| 1761 | core::core_arch::hexagon::v64 | q6_vw_vrmpyacc_vwvubrb | function | | |
| 1762 | core::core_arch::hexagon::v64 | q6_vw_vrmpyacc_vwvubvb | function | | |
| 1763 | core::core_arch::hexagon::v64 | q6_vw_vsatdw_vwvw | function | | |
| 1764 | core::core_arch::hexagon::v64 | q6_vw_vsub_vwvw | function | | |
| 1765 | core::core_arch::hexagon::v64 | q6_vw_vsub_vwvw_sat | function | | |
| 1766 | core::core_arch::hexagon::v64 | q6_w_equals_w | function | | |
| 1767 | core::core_arch::hexagon::v64 | q6_w_vcombine_vv | function | | |
| 1768 | core::core_arch::hexagon::v64 | q6_w_vdeal_vvr | function | | |
| 1769 | core::core_arch::hexagon::v64 | q6_w_vmpye_vwvuh | function | | |
| 1770 | core::core_arch::hexagon::v64 | q6_w_vmpyoacc_wvwvh | function | | |
| 1771 | core::core_arch::hexagon::v64 | q6_w_vshuff_vvr | function | | |
| 1772 | core::core_arch::hexagon::v64 | q6_w_vswap_qvv | function | | |
| 1773 | core::core_arch::hexagon::v64 | q6_w_vzero | function | | |
| 1774 | core::core_arch::hexagon::v64 | q6_wb_vadd_wbwb | function | | |
| 1775 | core::core_arch::hexagon::v64 | q6_wb_vadd_wbwb_sat | function | | |
| 1776 | core::core_arch::hexagon::v64 | q6_wb_vshuffoe_vbvb | function | | |
| 1777 | core::core_arch::hexagon::v64 | q6_wb_vsub_wbwb | function | | |
| 1778 | core::core_arch::hexagon::v64 | q6_wb_vsub_wbwb_sat | function | | |
| 1779 | core::core_arch::hexagon::v64 | q6_wh_vadd_vubvub | function | | |
| 1780 | core::core_arch::hexagon::v64 | q6_wh_vadd_whwh | function | | |
| 1781 | core::core_arch::hexagon::v64 | q6_wh_vadd_whwh_sat | function | | |
| 1782 | core::core_arch::hexagon::v64 | q6_wh_vaddacc_whvubvub | function | | |
| 1783 | core::core_arch::hexagon::v64 | q6_wh_vdmpy_wubrb | function | | |
| 1784 | core::core_arch::hexagon::v64 | q6_wh_vdmpyacc_whwubrb | function | | |
| 1785 | core::core_arch::hexagon::v64 | q6_wh_vlut16_vbvhi | function | | |
| 1786 | core::core_arch::hexagon::v64 | q6_wh_vlut16_vbvhr | function | | |
| 1787 | core::core_arch::hexagon::v64 | q6_wh_vlut16_vbvhr_nomatch | function | | |
| 1788 | core::core_arch::hexagon::v64 | q6_wh_vlut16or_whvbvhi | function | | |
| 1789 | core::core_arch::hexagon::v64 | q6_wh_vlut16or_whvbvhr | function | | |
| 1790 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubrb | function | | |
| 1791 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubrub | function | | |
| 1792 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubwb | function | | |
| 1793 | core::core_arch::hexagon::v64 | q6_wh_vmpa_wubwub | function | | |
| 1794 | core::core_arch::hexagon::v64 | q6_wh_vmpaacc_whwubrb | function | | |
| 1795 | core::core_arch::hexagon::v64 | q6_wh_vmpaacc_whwubrub | function | | |
| 1796 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vbvb | function | | |
| 1797 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vubrb | function | | |
| 1798 | core::core_arch::hexagon::v64 | q6_wh_vmpy_vubvb | function | | |
| 1799 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvbvb | function | | |
| 1800 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvubrb | function | | |
| 1801 | core::core_arch::hexagon::v64 | q6_wh_vmpyacc_whvubvb | function | | |
| 1802 | core::core_arch::hexagon::v64 | q6_wh_vshuffoe_vhvh | function | | |
| 1803 | core::core_arch::hexagon::v64 | q6_wh_vsub_vubvub | function | | |
| 1804 | core::core_arch::hexagon::v64 | q6_wh_vsub_whwh | function | | |
| 1805 | core::core_arch::hexagon::v64 | q6_wh_vsub_whwh_sat | function | | |
| 1806 | core::core_arch::hexagon::v64 | q6_wh_vsxt_vb | function | | |
| 1807 | core::core_arch::hexagon::v64 | q6_wh_vtmpy_wbrb | function | | |
| 1808 | core::core_arch::hexagon::v64 | q6_wh_vtmpy_wubrb | function | | |
| 1809 | core::core_arch::hexagon::v64 | q6_wh_vtmpyacc_whwbrb | function | | |
| 1810 | core::core_arch::hexagon::v64 | q6_wh_vtmpyacc_whwubrb | function | | |
| 1811 | core::core_arch::hexagon::v64 | q6_wh_vunpack_vb | function | | |
| 1812 | core::core_arch::hexagon::v64 | q6_wh_vunpackoor_whvb | function | | |
| 1813 | core::core_arch::hexagon::v64 | q6_whf_vcvt2_vb | function | | |
| 1814 | core::core_arch::hexagon::v64 | q6_whf_vcvt2_vub | function | | |
| 1815 | core::core_arch::hexagon::v64 | q6_whf_vcvt_v | function | | |
| 1816 | core::core_arch::hexagon::v64 | q6_whf_vcvt_vb | function | | |
| 1817 | core::core_arch::hexagon::v64 | q6_whf_vcvt_vub | function | | |
| 1818 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vhfvhf | function | | |
| 1819 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vqf16vhf | function | | |
| 1820 | core::core_arch::hexagon::v64 | q6_wqf32_vmpy_vqf16vqf16 | function | | |
| 1821 | core::core_arch::hexagon::v64 | q6_wsf_vadd_vhfvhf | function | | |
| 1822 | core::core_arch::hexagon::v64 | q6_wsf_vcvt_vhf | function | | |
| 1823 | core::core_arch::hexagon::v64 | q6_wsf_vmpy_vhfvhf | function | | |
| 1824 | core::core_arch::hexagon::v64 | q6_wsf_vmpyacc_wsfvhfvhf | function | | |
| 1825 | core::core_arch::hexagon::v64 | q6_wsf_vsub_vhfvhf | function | | |
| 1826 | core::core_arch::hexagon::v64 | q6_wub_vadd_wubwub_sat | function | | |
| 1827 | core::core_arch::hexagon::v64 | q6_wub_vsub_wubwub_sat | function | | |
| 1828 | core::core_arch::hexagon::v64 | q6_wuh_vadd_wuhwuh_sat | function | | |
| 1829 | core::core_arch::hexagon::v64 | q6_wuh_vmpy_vubrub | function | | |
| 1830 | core::core_arch::hexagon::v64 | q6_wuh_vmpy_vubvub | function | | |
| 1831 | core::core_arch::hexagon::v64 | q6_wuh_vmpyacc_wuhvubrub | function | | |
| 1832 | core::core_arch::hexagon::v64 | q6_wuh_vmpyacc_wuhvubvub | function | | |
| 1833 | core::core_arch::hexagon::v64 | q6_wuh_vsub_wuhwuh_sat | function | | |
| 1834 | core::core_arch::hexagon::v64 | q6_wuh_vunpack_vub | function | | |
| 1835 | core::core_arch::hexagon::v64 | q6_wuh_vzxt_vub | function | | |
| 1836 | core::core_arch::hexagon::v64 | q6_wuw_vadd_wuwwuw_sat | function | | |
| 1837 | core::core_arch::hexagon::v64 | q6_wuw_vdsad_wuhruh | function | | |
| 1838 | core::core_arch::hexagon::v64 | q6_wuw_vdsadacc_wuwwuhruh | function | | |
| 1839 | core::core_arch::hexagon::v64 | q6_wuw_vmpy_vuhruh | function | | |
| 1840 | core::core_arch::hexagon::v64 | q6_wuw_vmpy_vuhvuh | function | | |
| 1841 | core::core_arch::hexagon::v64 | q6_wuw_vmpyacc_wuwvuhruh | function | | |
| 1842 | core::core_arch::hexagon::v64 | q6_wuw_vmpyacc_wuwvuhvuh | function | | |
| 1843 | core::core_arch::hexagon::v64 | q6_wuw_vrmpy_wubrubi | function | | |
| 1844 | core::core_arch::hexagon::v64 | q6_wuw_vrmpyacc_wuwwubrubi | function | | |
| 1845 | core::core_arch::hexagon::v64 | q6_wuw_vrsad_wubrubi | function | | |
| 1846 | core::core_arch::hexagon::v64 | q6_wuw_vrsadacc_wuwwubrubi | function | | |
| 1847 | core::core_arch::hexagon::v64 | q6_wuw_vsub_wuwwuw_sat | function | | |
| 1848 | core::core_arch::hexagon::v64 | q6_wuw_vunpack_vuh | function | | |
| 1849 | core::core_arch::hexagon::v64 | q6_wuw_vzxt_vuh | function | | |
| 1850 | core::core_arch::hexagon::v64 | q6_ww_v6mpy_wubwbi_h | function | | |
| 1851 | core::core_arch::hexagon::v64 | q6_ww_v6mpy_wubwbi_v | function | | |
| 1852 | core::core_arch::hexagon::v64 | q6_ww_v6mpyacc_wwwubwbi_h | function | | |
| 1853 | core::core_arch::hexagon::v64 | q6_ww_v6mpyacc_wwwubwbi_v | function | | |
| 1854 | core::core_arch::hexagon::v64 | q6_ww_vadd_vhvh | function | | |
| 1855 | core::core_arch::hexagon::v64 | q6_ww_vadd_vuhvuh | function | | |
| 1856 | core::core_arch::hexagon::v64 | q6_ww_vadd_wwww | function | | |
| 1857 | core::core_arch::hexagon::v64 | q6_ww_vadd_wwww_sat | function | | |
| 1858 | core::core_arch::hexagon::v64 | q6_ww_vaddacc_wwvhvh | function | | |
| 1859 | core::core_arch::hexagon::v64 | q6_ww_vaddacc_wwvuhvuh | function | | |
| 1860 | core::core_arch::hexagon::v64 | q6_ww_vasrinto_wwvwvw | function | | |
| 1861 | core::core_arch::hexagon::v64 | q6_ww_vdmpy_whrb | function | | |
| 1862 | core::core_arch::hexagon::v64 | q6_ww_vdmpyacc_wwwhrb | function | | |
| 1863 | core::core_arch::hexagon::v64 | q6_ww_vmpa_whrb | function | | |
| 1864 | core::core_arch::hexagon::v64 | q6_ww_vmpa_wuhrb | function | | |
| 1865 | core::core_arch::hexagon::v64 | q6_ww_vmpaacc_wwwhrb | function | ||
| 1866 | core::core_arch::hexagon::v64 | q6_ww_vmpaacc_wwwuhrb | function | ||
| 1867 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhrh | function | ||
| 1868 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhvh | function | ||
| 1869 | core::core_arch::hexagon::v64 | q6_ww_vmpy_vhvuh | function | ||
| 1870 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhrh | function | ||
| 1871 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhrh_sat | function | ||
| 1872 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhvh | function | ||
| 1873 | core::core_arch::hexagon::v64 | q6_ww_vmpyacc_wwvhvuh | function | ||
| 1874 | core::core_arch::hexagon::v64 | q6_ww_vrmpy_wubrbi | function | ||
| 1875 | core::core_arch::hexagon::v64 | q6_ww_vrmpyacc_wwwubrbi | function | ||
| 1876 | core::core_arch::hexagon::v64 | q6_ww_vsub_vhvh | function | ||
| 1877 | core::core_arch::hexagon::v64 | q6_ww_vsub_vuhvuh | function | ||
| 1878 | core::core_arch::hexagon::v64 | q6_ww_vsub_wwww | function | ||
| 1879 | core::core_arch::hexagon::v64 | q6_ww_vsub_wwww_sat | function | ||
| 1880 | core::core_arch::hexagon::v64 | q6_ww_vsxt_vh | function | ||
| 1881 | core::core_arch::hexagon::v64 | q6_ww_vtmpy_whrb | function | ||
| 1882 | core::core_arch::hexagon::v64 | q6_ww_vtmpyacc_wwwhrb | function | ||
| 1883 | core::core_arch::hexagon::v64 | q6_ww_vunpack_vh | function | ||
| 1884 | core::core_arch::hexagon::v64 | q6_ww_vunpackoor_wwvh | function | ||
| 1885 | core::core_arch::loongarch32 | cacop | function | ||
| 1886 | core::core_arch::loongarch32 | csrrd | function | ||
| 1887 | core::core_arch::loongarch32 | csrwr | function | ||
| 1888 | core::core_arch::loongarch32 | csrxchg | function | ||
| 1889 | core::core_arch::loongarch64 | asrtgt | function | ||
| 1890 | core::core_arch::loongarch64 | asrtle | function | ||
| 1891 | core::core_arch::loongarch64 | cacop | function | ||
| 1892 | core::core_arch::loongarch64 | csrrd | function | ||
| 1893 | core::core_arch::loongarch64 | csrwr | function | ||
| 1894 | core::core_arch::loongarch64 | csrxchg | function | ||
| 1895 | core::core_arch::loongarch64 | iocsrrd_d | function | ||
| 1896 | core::core_arch::loongarch64 | iocsrwr_d | function | ||
| 1897 | core::core_arch::loongarch64 | lddir | function | ||
| 1898 | core::core_arch::loongarch64 | ldpte | function | ||
| 1899 | core::core_arch::loongarch64::lasx::generated | lasx_xvld | function | ||
| 1900 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_b | function | ||
| 1901 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_d | function | ||
| 1902 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_h | function | ||
| 1903 | core::core_arch::loongarch64::lasx::generated | lasx_xvldrepl_w | function | ||
| 1904 | core::core_arch::loongarch64::lasx::generated | lasx_xvldx | function | ||
| 1905 | core::core_arch::loongarch64::lasx::generated | lasx_xvst | function | ||
| 1906 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_b | function | ||
| 1907 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_d | function | ||
| 1908 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_h | function | ||
| 1909 | core::core_arch::loongarch64::lasx::generated | lasx_xvstelm_w | function | ||
| 1910 | core::core_arch::loongarch64::lasx::generated | lasx_xvstx | function | ||
| 1911 | core::core_arch::loongarch64::lsx::generated | lsx_vld | function | ||
| 1912 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_b | function | ||
| 1913 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_d | function | ||
| 1914 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_h | function | ||
| 1915 | core::core_arch::loongarch64::lsx::generated | lsx_vldrepl_w | function | ||
| 1916 | core::core_arch::loongarch64::lsx::generated | lsx_vldx | function | ||
| 1917 | core::core_arch::loongarch64::lsx::generated | lsx_vst | function | ||
| 1918 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_b | function | ||
| 1919 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_d | function | ||
| 1920 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_h | function | ||
| 1921 | core::core_arch::loongarch64::lsx::generated | lsx_vstelm_w | function | ||
| 1922 | core::core_arch::loongarch64::lsx::generated | lsx_vstx | function | ||
| 1923 | core::core_arch::loongarch_shared | brk | function | ||
| 1924 | core::core_arch::loongarch_shared | iocsrrd_b | function | ||
| 1925 | core::core_arch::loongarch_shared | iocsrrd_h | function | ||
| 1926 | core::core_arch::loongarch_shared | iocsrrd_w | function | ||
| 1927 | core::core_arch::loongarch_shared | iocsrwr_b | function | ||
| 1928 | core::core_arch::loongarch_shared | iocsrwr_h | function | ||
| 1929 | core::core_arch::loongarch_shared | iocsrwr_w | function | ||
| 1930 | core::core_arch::loongarch_shared | movgr2fcsr | function | ||
| 1931 | core::core_arch::loongarch_shared | syscall | function | ||
| 1932 | core::core_arch::mips | break_ | function | ||
| 1933 | core::core_arch::nvptx | __assert_fail | function | ||
| 1934 | core::core_arch::nvptx | _block_dim_x | function | ||
| 1935 | core::core_arch::nvptx | _block_dim_y | function | ||
| 1936 | core::core_arch::nvptx | _block_dim_z | function | ||
| 1937 | core::core_arch::nvptx | _block_idx_x | function | ||
| 1938 | core::core_arch::nvptx | _block_idx_y | function | ||
| 1939 | core::core_arch::nvptx | _block_idx_z | function | ||
| 1940 | core::core_arch::nvptx | _grid_dim_x | function | ||
| 1941 | core::core_arch::nvptx | _grid_dim_y | function | ||
| 1942 | core::core_arch::nvptx | _grid_dim_z | function | ||
| 1943 | core::core_arch::nvptx | _syncthreads | function | ||
| 1944 | core::core_arch::nvptx | _thread_idx_x | function | ||
| 1945 | core::core_arch::nvptx | _thread_idx_y | function | ||
| 1946 | core::core_arch::nvptx | _thread_idx_z | function | ||
| 1947 | core::core_arch::nvptx | free | function | ||
| 1948 | core::core_arch::nvptx | malloc | function | ||
| 1949 | core::core_arch::nvptx | trap | function | ||
| 1950 | core::core_arch::nvptx | vprintf | function | ||
| 1951 | core::core_arch::nvptx::packed | f16x2_add | function | ||
| 1952 | core::core_arch::nvptx::packed | f16x2_fma | function | ||
| 1953 | core::core_arch::nvptx::packed | f16x2_max | function | ||
| 1954 | core::core_arch::nvptx::packed | f16x2_max_nan | function | ||
| 1955 | core::core_arch::nvptx::packed | f16x2_min | function | ||
| 1956 | core::core_arch::nvptx::packed | f16x2_min_nan | function | ||
| 1957 | core::core_arch::nvptx::packed | f16x2_mul | function | ||
| 1958 | core::core_arch::nvptx::packed | f16x2_neg | function | ||
| 1959 | core::core_arch::nvptx::packed | f16x2_sub | function | ||
| 1960 | core::core_arch::powerpc | trap | function | ||
| 1961 | core::core_arch::powerpc64::vsx | vec_xl_len | function | ||
| 1962 | core::core_arch::powerpc64::vsx | vec_xst_len | function | ||
| 1963 | core::core_arch::powerpc::altivec | vec_abs | function | ||
| 1964 | core::core_arch::powerpc::altivec | vec_abss | function | ||
| 1965 | core::core_arch::powerpc::altivec | vec_add | function | ||
| 1966 | core::core_arch::powerpc::altivec | vec_addc | function | ||
| 1967 | core::core_arch::powerpc::altivec | vec_adde | function | ||
| 1968 | core::core_arch::powerpc::altivec | vec_adds | function | ||
| 1969 | core::core_arch::powerpc::altivec | vec_all_eq | function | ||
| 1970 | core::core_arch::powerpc::altivec | vec_all_ge | function | ||
| 1971 | core::core_arch::powerpc::altivec | vec_all_gt | function | ||
| 1972 | core::core_arch::powerpc::altivec | vec_all_in | function | ||
| 1973 | core::core_arch::powerpc::altivec | vec_all_le | function | ||
| 1974 | core::core_arch::powerpc::altivec | vec_all_lt | function | ||
| 1975 | core::core_arch::powerpc::altivec | vec_all_nan | function | ||
| 1976 | core::core_arch::powerpc::altivec | vec_all_ne | function | ||
| 1977 | core::core_arch::powerpc::altivec | vec_all_nge | function | ||
| 1978 | core::core_arch::powerpc::altivec | vec_all_ngt | function | ||
| 1979 | core::core_arch::powerpc::altivec | vec_all_nle | function | ||
| 1980 | core::core_arch::powerpc::altivec | vec_all_nlt | function | ||
| 1981 | core::core_arch::powerpc::altivec | vec_all_numeric | function | ||
| 1982 | core::core_arch::powerpc::altivec | vec_and | function | ||
| 1983 | core::core_arch::powerpc::altivec | vec_andc | function | ||
| 1984 | core::core_arch::powerpc::altivec | vec_any_eq | function | ||
| 1985 | core::core_arch::powerpc::altivec | vec_any_ge | function | ||
| 1986 | core::core_arch::powerpc::altivec | vec_any_gt | function | ||
| 1987 | core::core_arch::powerpc::altivec | vec_any_le | function | ||
| 1988 | core::core_arch::powerpc::altivec | vec_any_lt | function | ||
| 1989 | core::core_arch::powerpc::altivec | vec_any_nan | function | ||
| 1990 | core::core_arch::powerpc::altivec | vec_any_ne | function | ||
| 1991 | core::core_arch::powerpc::altivec | vec_any_nge | function | ||
| 1992 | core::core_arch::powerpc::altivec | vec_any_ngt | function | ||
| 1993 | core::core_arch::powerpc::altivec | vec_any_nle | function | ||
| 1994 | core::core_arch::powerpc::altivec | vec_any_nlt | function | ||
| 1995 | core::core_arch::powerpc::altivec | vec_any_numeric | function | ||
| 1996 | core::core_arch::powerpc::altivec | vec_any_out | function | ||
| 1997 | core::core_arch::powerpc::altivec | vec_avg | function | ||
| 1998 | core::core_arch::powerpc::altivec | vec_ceil | function | ||
| 1999 | core::core_arch::powerpc::altivec | vec_cmpb | function | ||
| 2000 | core::core_arch::powerpc::altivec | vec_cmpeq | function | ||
| 2001 | core::core_arch::powerpc::altivec | vec_cmpge | function | ||
| 2002 | core::core_arch::powerpc::altivec | vec_cmpgt | function | ||
| 2003 | core::core_arch::powerpc::altivec | vec_cmple | function | ||
| 2004 | core::core_arch::powerpc::altivec | vec_cmplt | function | ||
| 2005 | core::core_arch::powerpc::altivec | vec_cmpne | function | ||
| 2006 | core::core_arch::powerpc::altivec | vec_cntlz | function | ||
| 2007 | core::core_arch::powerpc::altivec | vec_ctf | function | ||
| 2008 | core::core_arch::powerpc::altivec | vec_cts | function | ||
| 2009 | core::core_arch::powerpc::altivec | vec_ctu | function | ||
| 2010 | core::core_arch::powerpc::altivec | vec_expte | function | ||
| 2011 | core::core_arch::powerpc::altivec | vec_extract | function | ||
| 2012 | core::core_arch::powerpc::altivec | vec_floor | function | ||
| 2013 | core::core_arch::powerpc::altivec | vec_insert | function | ||
| 2014 | core::core_arch::powerpc::altivec | vec_ld | function | ||
| 2015 | core::core_arch::powerpc::altivec | vec_lde | function | ||
| 2016 | core::core_arch::powerpc::altivec | vec_ldl | function | ||
| 2017 | core::core_arch::powerpc::altivec | vec_loge | function | ||
| 2018 | core::core_arch::powerpc::altivec | vec_madd | function | ||
| 2019 | core::core_arch::powerpc::altivec | vec_madds | function | ||
| 2020 | core::core_arch::powerpc::altivec | vec_max | function | ||
| 2021 | core::core_arch::powerpc::altivec | vec_mergeh | function | ||
| 2022 | core::core_arch::powerpc::altivec | vec_mergel | function | ||
| 2023 | core::core_arch::powerpc::altivec | vec_mfvscr | function | ||
| 2024 | core::core_arch::powerpc::altivec | vec_min | function | ||
| 2025 | core::core_arch::powerpc::altivec | vec_mladd | function | ||
| 2026 | core::core_arch::powerpc::altivec | vec_mradds | function | ||
| 2027 | core::core_arch::powerpc::altivec | vec_msum | function | ||
| 2028 | core::core_arch::powerpc::altivec | vec_msums | function | ||
| 2029 | core::core_arch::powerpc::altivec | vec_mul | function | ||
| 2030 | core::core_arch::powerpc::altivec | vec_nand | function | ||
| 2031 | core::core_arch::powerpc::altivec | vec_neg | function | ||
| 2032 | core::core_arch::powerpc::altivec | vec_nmsub | function | ||
| 2033 | core::core_arch::powerpc::altivec | vec_nor | function | ||
| 2034 | core::core_arch::powerpc::altivec | vec_or | function | ||
| 2035 | core::core_arch::powerpc::altivec | vec_orc | function | ||
| 2036 | core::core_arch::powerpc::altivec | vec_pack | function | ||
| 2037 | core::core_arch::powerpc::altivec | vec_packs | function | ||
| 2038 | core::core_arch::powerpc::altivec | vec_packsu | function | ||
| 2039 | core::core_arch::powerpc::altivec | vec_rl | function | ||
| 2040 | core::core_arch::powerpc::altivec | vec_round | function | ||
| 2041 | core::core_arch::powerpc::altivec | vec_sel | function | ||
| 2042 | core::core_arch::powerpc::altivec | vec_sl | function | ||
| 2043 | core::core_arch::powerpc::altivec | vec_sld | function | ||
| 2044 | core::core_arch::powerpc::altivec | vec_sldw | function | ||
| 2045 | core::core_arch::powerpc::altivec | vec_sll | function | ||
| 2046 | core::core_arch::powerpc::altivec | vec_slo | function | ||
| 2047 | core::core_arch::powerpc::altivec | vec_slv | function | ||
| 2048 | core::core_arch::powerpc::altivec | vec_splat | function | ||
| 2049 | core::core_arch::powerpc::altivec | vec_splat_s16 | function | ||
| 2050 | core::core_arch::powerpc::altivec | vec_splat_s32 | function | ||
| 2051 | core::core_arch::powerpc::altivec | vec_splat_s8 | function | ||
| 2052 | core::core_arch::powerpc::altivec | vec_splat_u16 | function | ||
| 2053 | core::core_arch::powerpc::altivec | vec_splat_u32 | function | ||
| 2054 | core::core_arch::powerpc::altivec | vec_splat_u8 | function | ||
| 2055 | core::core_arch::powerpc::altivec | vec_splats | function | ||
| 2056 | core::core_arch::powerpc::altivec | vec_sr | function | ||
| 2057 | core::core_arch::powerpc::altivec | vec_sra | function | ||
| 2058 | core::core_arch::powerpc::altivec | vec_srl | function | ||
| 2059 | core::core_arch::powerpc::altivec | vec_sro | function | ||
| 2060 | core::core_arch::powerpc::altivec | vec_srv | function | ||
| 2061 | core::core_arch::powerpc::altivec | vec_st | function | ||
| 2062 | core::core_arch::powerpc::altivec | vec_ste | function | ||
| 2063 | core::core_arch::powerpc::altivec | vec_stl | function | ||
| 2064 | core::core_arch::powerpc::altivec | vec_sub | function | ||
| 2065 | core::core_arch::powerpc::altivec | vec_subc | function | ||
| 2066 | core::core_arch::powerpc::altivec | vec_subs | function | ||
| 2067 | core::core_arch::powerpc::altivec | vec_sum4s | function | ||
| 2068 | core::core_arch::powerpc::altivec | vec_unpackh | function | ||
| 2069 | core::core_arch::powerpc::altivec | vec_unpackl | function | ||
| 2070 | core::core_arch::powerpc::altivec | vec_xl | function | ||
| 2071 | core::core_arch::powerpc::altivec | vec_xor | function | ||
| 2072 | core::core_arch::powerpc::altivec | vec_xst | function | ||
| 2073 | core::core_arch::powerpc::altivec::endian | vec_mule | function | ||
| 2074 | core::core_arch::powerpc::altivec::endian | vec_mulo | function | ||
| 2075 | core::core_arch::powerpc::altivec::endian | vec_perm | function | ||
| 2076 | core::core_arch::powerpc::altivec::endian | vec_sum2s | function | ||
| 2077 | core::core_arch::powerpc::vsx | vec_mergee | function | ||
| 2078 | core::core_arch::powerpc::vsx | vec_mergeo | function | ||
| 2079 | core::core_arch::powerpc::vsx | vec_xxpermdi | function | ||
| 2080 | core::core_arch::riscv64 | hlv_d | function | ||
| 2081 | core::core_arch::riscv64 | hlv_wu | function | ||
| 2082 | core::core_arch::riscv64 | hsv_d | function | ||
| 2083 | core::core_arch::riscv_shared | fence_i | function | ||
| 2084 | core::core_arch::riscv_shared | hfence_gvma | function | ||
| 2085 | core::core_arch::riscv_shared | hfence_gvma_all | function | ||
| 2086 | core::core_arch::riscv_shared | hfence_gvma_gaddr | function | ||
| 2087 | core::core_arch::riscv_shared | hfence_gvma_vmid | function | ||
| 2088 | core::core_arch::riscv_shared | hfence_vvma | function | ||
| 2089 | core::core_arch::riscv_shared | hfence_vvma_all | function | ||
| 2090 | core::core_arch::riscv_shared | hfence_vvma_asid | function | ||
| 2091 | core::core_arch::riscv_shared | hfence_vvma_vaddr | function | ||
| 2092 | core::core_arch::riscv_shared | hinval_gvma | function | ||
| 2093 | core::core_arch::riscv_shared | hinval_gvma_all | function | ||
| 2094 | core::core_arch::riscv_shared | hinval_gvma_gaddr | function | ||
| 2095 | core::core_arch::riscv_shared | hinval_gvma_vmid | function | ||
| 2096 | core::core_arch::riscv_shared | hinval_vvma | function | ||
| 2097 | core::core_arch::riscv_shared | hinval_vvma_all | function | ||
| 2098 | core::core_arch::riscv_shared | hinval_vvma_asid | function | ||
| 2099 | core::core_arch::riscv_shared | hinval_vvma_vaddr | function | ||
| 2100 | core::core_arch::riscv_shared | hlv_b | function | ||
| 2101 | core::core_arch::riscv_shared | hlv_bu | function | ||
| 2102 | core::core_arch::riscv_shared | hlv_h | function | ||
| 2103 | core::core_arch::riscv_shared | hlv_hu | function | ||
| 2104 | core::core_arch::riscv_shared | hlv_w | function | ||
| 2105 | core::core_arch::riscv_shared | hlvx_hu | function | ||
| 2106 | core::core_arch::riscv_shared | hlvx_wu | function | ||
| 2107 | core::core_arch::riscv_shared | hsv_b | function | ||
| 2108 | core::core_arch::riscv_shared | hsv_h | function | ||
| 2109 | core::core_arch::riscv_shared | hsv_w | function | ||
| 2110 | core::core_arch::riscv_shared | sfence_inval_ir | function | ||
| 2111 | core::core_arch::riscv_shared | sfence_vma | function | ||
| 2112 | core::core_arch::riscv_shared | sfence_vma_all | function | ||
| 2113 | core::core_arch::riscv_shared | sfence_vma_asid | function | ||
| 2114 | core::core_arch::riscv_shared | sfence_vma_vaddr | function | ||
| 2115 | core::core_arch::riscv_shared | sfence_w_inval | function | ||
| 2116 | core::core_arch::riscv_shared | sinval_vma | function | ||
| 2117 | core::core_arch::riscv_shared | sinval_vma_all | function | ||
| 2118 | core::core_arch::riscv_shared | sinval_vma_asid | function | ||
| 2119 | core::core_arch::riscv_shared | sinval_vma_vaddr | function | ||
| 2120 | core::core_arch::riscv_shared | wfi | function | ||
| 2121 | core::core_arch::s390x::vector | vec_abs | function | ||
| 2122 | core::core_arch::s390x::vector | vec_add | function | ||
| 2123 | core::core_arch::s390x::vector | vec_add_u128 | function | ||
| 2124 | core::core_arch::s390x::vector | vec_addc_u128 | function | ||
| 2125 | core::core_arch::s390x::vector | vec_adde_u128 | function | ||
| 2126 | core::core_arch::s390x::vector | vec_addec_u128 | function | ||
| 2127 | core::core_arch::s390x::vector | vec_all_eq | function | ||
| 2128 | core::core_arch::s390x::vector | vec_all_ge | function | ||
| 2129 | core::core_arch::s390x::vector | vec_all_gt | function | ||
| 2130 | core::core_arch::s390x::vector | vec_all_le | function | ||
| 2131 | core::core_arch::s390x::vector | vec_all_lt | function | ||
| 2132 | core::core_arch::s390x::vector | vec_all_nan | function | ||
| 2133 | core::core_arch::s390x::vector | vec_all_ne | function | ||
| 2134 | core::core_arch::s390x::vector | vec_all_nge | function | ||
| 2135 | core::core_arch::s390x::vector | vec_all_ngt | function | ||
| 2136 | core::core_arch::s390x::vector | vec_all_nle | function | ||
| 2137 | core::core_arch::s390x::vector | vec_all_nlt | function | ||
| 2138 | core::core_arch::s390x::vector | vec_all_numeric | function | ||
| 2139 | core::core_arch::s390x::vector | vec_and | function | ||
| 2140 | core::core_arch::s390x::vector | vec_andc | function | ||
| 2141 | core::core_arch::s390x::vector | vec_any_eq | function | ||
| 2142 | core::core_arch::s390x::vector | vec_any_ge | function | ||
| 2143 | core::core_arch::s390x::vector | vec_any_gt | function | ||
| 2144 | core::core_arch::s390x::vector | vec_any_le | function | ||
| 2145 | core::core_arch::s390x::vector | vec_any_lt | function | ||
| 2146 | core::core_arch::s390x::vector | vec_any_nan | function | ||
| 2147 | core::core_arch::s390x::vector | vec_any_ne | function | ||
| 2148 | core::core_arch::s390x::vector | vec_any_nge | function | ||
| 2149 | core::core_arch::s390x::vector | vec_any_ngt | function | ||
| 2150 | core::core_arch::s390x::vector | vec_any_nle | function | ||
| 2151 | core::core_arch::s390x::vector | vec_any_nlt | function | ||
| 2152 | core::core_arch::s390x::vector | vec_any_numeric | function | ||
| 2153 | core::core_arch::s390x::vector | vec_avg | function | ||
| 2154 | core::core_arch::s390x::vector | vec_bperm_u128 | function | ||
| 2155 | core::core_arch::s390x::vector | vec_ceil | function | ||
| 2156 | core::core_arch::s390x::vector | vec_checksum | function | ||
| 2157 | core::core_arch::s390x::vector | vec_cmpeq | function | ||
| 2158 | core::core_arch::s390x::vector | vec_cmpeq_idx | function | ||
| 2159 | core::core_arch::s390x::vector | vec_cmpeq_idx_cc | function | ||
| 2160 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx | function | ||
| 2161 | core::core_arch::s390x::vector | vec_cmpeq_or_0_idx_cc | function | ||
| 2162 | core::core_arch::s390x::vector | vec_cmpge | function | ||
| 2163 | core::core_arch::s390x::vector | vec_cmpgt | function | ||
| 2164 | core::core_arch::s390x::vector | vec_cmple | function | ||
| 2165 | core::core_arch::s390x::vector | vec_cmplt | function | ||
| 2166 | core::core_arch::s390x::vector | vec_cmpne | function | ||
| 2167 | core::core_arch::s390x::vector | vec_cmpne_idx | function | ||
| 2168 | core::core_arch::s390x::vector | vec_cmpne_idx_cc | function | ||
| 2169 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx | function | ||
| 2170 | core::core_arch::s390x::vector | vec_cmpne_or_0_idx_cc | function | ||
| 2171 | core::core_arch::s390x::vector | vec_cmpnrg | function | ||
| 2172 | core::core_arch::s390x::vector | vec_cmpnrg_cc | function | ||
| 2173 | core::core_arch::s390x::vector | vec_cmpnrg_idx | function | ||
| 2174 | core::core_arch::s390x::vector | vec_cmpnrg_idx_cc | function | ||
| 2175 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx | function | ||
| 2176 | core::core_arch::s390x::vector | vec_cmpnrg_or_0_idx_cc | function | ||
| 2177 | core::core_arch::s390x::vector | vec_cmprg | function | ||
| 2178 | core::core_arch::s390x::vector | vec_cmprg_cc | function | ||
| 2179 | core::core_arch::s390x::vector | vec_cmprg_idx | function | ||
| 2180 | core::core_arch::s390x::vector | vec_cmprg_idx_cc | function | ||
| 2181 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx | function | ||
| 2182 | core::core_arch::s390x::vector | vec_cmprg_or_0_idx_cc | function | ||
| 2183 | core::core_arch::s390x::vector | vec_cntlz | function | ||
| 2184 | core::core_arch::s390x::vector | vec_cnttz | function | ||
| 2185 | core::core_arch::s390x::vector | vec_convert_from_fp16 | function | ||
| 2186 | core::core_arch::s390x::vector | vec_convert_to_fp16 | function | ||
| 2187 | core::core_arch::s390x::vector | vec_cp_until_zero | function | ||
| 2188 | core::core_arch::s390x::vector | vec_cp_until_zero_cc | function | ||
| 2189 | core::core_arch::s390x::vector | vec_double | function | ||
| 2190 | core::core_arch::s390x::vector | vec_doublee | function | ||
| 2191 | core::core_arch::s390x::vector | vec_eqv | function | ||
| 2192 | core::core_arch::s390x::vector | vec_extend_s64 | function | ||
| 2193 | core::core_arch::s390x::vector | vec_extend_to_fp32_hi | function | ||
| 2194 | core::core_arch::s390x::vector | vec_extend_to_fp32_lo | function | ||
| 2195 | core::core_arch::s390x::vector | vec_extract | function | ||
| 2196 | core::core_arch::s390x::vector | vec_find_any_eq | function | ||
| 2197 | core::core_arch::s390x::vector | vec_find_any_eq_cc | function | ||
| 2198 | core::core_arch::s390x::vector | vec_find_any_eq_idx | function | ||
| 2199 | core::core_arch::s390x::vector | vec_find_any_eq_idx_cc | function | ||
| 2200 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx | function | ||
| 2201 | core::core_arch::s390x::vector | vec_find_any_eq_or_0_idx_cc | function | ||
| 2202 | core::core_arch::s390x::vector | vec_find_any_ne | function | ||
| 2203 | core::core_arch::s390x::vector | vec_find_any_ne_cc | function | ||
| 2204 | core::core_arch::s390x::vector | vec_find_any_ne_idx | function | ||
| 2205 | core::core_arch::s390x::vector | vec_find_any_ne_idx_cc | function | ||
| 2206 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx | function | ||
| 2207 | core::core_arch::s390x::vector | vec_find_any_ne_or_0_idx_cc | function | ||
| 2208 | core::core_arch::s390x::vector | vec_float | function | ||
| 2209 | core::core_arch::s390x::vector | vec_floate | function | ||
| 2210 | core::core_arch::s390x::vector | vec_floor | function | ||
| 2211 | core::core_arch::s390x::vector | vec_fp_test_data_class | function | ||
| 2212 | core::core_arch::s390x::vector | vec_gather_element | function | ||
| 2213 | core::core_arch::s390x::vector | vec_genmask | function | ||
| 2214 | core::core_arch::s390x::vector | vec_genmasks_16 | function | ||
| 2215 | core::core_arch::s390x::vector | vec_genmasks_32 | function | ||
| 2216 | core::core_arch::s390x::vector | vec_genmasks_64 | function | ||
| 2217 | core::core_arch::s390x::vector | vec_genmasks_8 | function | ||
| 2218 | core::core_arch::s390x::vector | vec_gfmsum | function | ||
| 2219 | core::core_arch::s390x::vector | vec_gfmsum_128 | function | ||
| 2220 | core::core_arch::s390x::vector | vec_gfmsum_accum | function | ||
| 2221 | core::core_arch::s390x::vector | vec_gfmsum_accum_128 | function | ||
| 2222 | core::core_arch::s390x::vector | vec_insert | function | ||
| 2223 | core::core_arch::s390x::vector | vec_insert_and_zero | function | ||
| 2224 | core::core_arch::s390x::vector | vec_load_bndry | function | ||
| 2225 | core::core_arch::s390x::vector | vec_load_len | function | ||
| 2226 | core::core_arch::s390x::vector | vec_load_len_r | function | ||
| 2227 | core::core_arch::s390x::vector | vec_load_pair | function | ||
| 2228 | core::core_arch::s390x::vector | vec_madd | function | ||
| 2229 | core::core_arch::s390x::vector | vec_max | function | ||
| 2230 | core::core_arch::s390x::vector | vec_meadd | function | ||
| 2231 | core::core_arch::s390x::vector | vec_mergeh | function | ||
| 2232 | core::core_arch::s390x::vector | vec_mergel | function | ||
| 2233 | core::core_arch::s390x::vector | vec_mhadd | function | ||
| 2234 | core::core_arch::s390x::vector | vec_min | function | ||
| 2235 | core::core_arch::s390x::vector | vec_mladd | function | ||
| 2236 | core::core_arch::s390x::vector | vec_moadd | function | ||
| 2237 | core::core_arch::s390x::vector | vec_msub | function | ||
| 2238 | core::core_arch::s390x::vector | vec_msum_u128 | function | ||
| 2239 | core::core_arch::s390x::vector | vec_mul | function | ||
| 2240 | core::core_arch::s390x::vector | vec_mule | function | ||
| 2241 | core::core_arch::s390x::vector | vec_mulh | function | ||
| 2242 | core::core_arch::s390x::vector | vec_mulo | function | ||
| 2243 | core::core_arch::s390x::vector | vec_nabs | function | ||
| 2244 | core::core_arch::s390x::vector | vec_nand | function | ||
| 2245 | core::core_arch::s390x::vector | vec_neg | function | ||
| 2246 | core::core_arch::s390x::vector | vec_nmadd | function | ||
| 2247 | core::core_arch::s390x::vector | vec_nmsub | function | ||
| 2248 | core::core_arch::s390x::vector | vec_nor | function | ||
| 2249 | core::core_arch::s390x::vector | vec_or | function | ||
| 2250 | core::core_arch::s390x::vector | vec_orc | function | ||
| 2251 | core::core_arch::s390x::vector | vec_pack | function | ||
| 2252 | core::core_arch::s390x::vector | vec_packs | function | ||
| 2253 | core::core_arch::s390x::vector | vec_packs_cc | function | ||
| 2254 | core::core_arch::s390x::vector | vec_packsu | function | ||
| 2255 | core::core_arch::s390x::vector | vec_packsu_cc | function | ||
| 2256 | core::core_arch::s390x::vector | vec_perm | function | ||
| 2257 | core::core_arch::s390x::vector | vec_popcnt | function | ||
| 2258 | core::core_arch::s390x::vector | vec_promote | function | ||
| 2259 | core::core_arch::s390x::vector | vec_revb | function | ||
| 2260 | core::core_arch::s390x::vector | vec_reve | function | ||
| 2261 | core::core_arch::s390x::vector | vec_rint | function | ||
| 2262 | core::core_arch::s390x::vector | vec_rl | function | ||
| 2263 | core::core_arch::s390x::vector | vec_rli | function | ||
| 2264 | core::core_arch::s390x::vector | vec_round | function | ||
| 2265 | core::core_arch::s390x::vector | vec_round_from_fp32 | function | ||
| 2266 | core::core_arch::s390x::vector | vec_roundc | function | ||
| 2267 | core::core_arch::s390x::vector | vec_roundm | function | ||
| 2268 | core::core_arch::s390x::vector | vec_roundp | function | ||
| 2269 | core::core_arch::s390x::vector | vec_roundz | function | ||
| 2270 | core::core_arch::s390x::vector | vec_search_string_cc | function | ||
| 2271 | core::core_arch::s390x::vector | vec_search_string_until_zero_cc | function | ||
| 2272 | core::core_arch::s390x::vector | vec_sel | function | ||
| 2273 | core::core_arch::s390x::vector | vec_signed | function | ||
| 2274 | core::core_arch::s390x::vector | vec_sl | function | ||
| 2275 | core::core_arch::s390x::vector | vec_slb | function | ||
| 2276 | core::core_arch::s390x::vector | vec_sld | function | ||
| 2277 | core::core_arch::s390x::vector | vec_sldb | function | ||
| 2278 | core::core_arch::s390x::vector | vec_sldw | function | ||
| 2279 | core::core_arch::s390x::vector | vec_sll | function | ||
| 2280 | core::core_arch::s390x::vector | vec_splat | function | ||
| 2281 | core::core_arch::s390x::vector | vec_splat_s16 | function | ||
| 2282 | core::core_arch::s390x::vector | vec_splat_s32 | function | ||
| 2283 | core::core_arch::s390x::vector | vec_splat_s64 | function | ||
| 2284 | core::core_arch::s390x::vector | vec_splat_s8 | function | ||
| 2285 | core::core_arch::s390x::vector | vec_splat_u16 | function | ||
| 2286 | core::core_arch::s390x::vector | vec_splat_u32 | function | ||
| 2287 | core::core_arch::s390x::vector | vec_splat_u64 | function | ||
| 2288 | core::core_arch::s390x::vector | vec_splat_u8 | function | ||
| 2289 | core::core_arch::s390x::vector | vec_splats | function | ||
| 2290 | core::core_arch::s390x::vector | vec_sqrt | function | ||
| 2291 | core::core_arch::s390x::vector | vec_sr | function | ||
| 2292 | core::core_arch::s390x::vector | vec_sra | function | ||
| 2293 | core::core_arch::s390x::vector | vec_srab | function | ||
| 2294 | core::core_arch::s390x::vector | vec_sral | function | ||
| 2295 | core::core_arch::s390x::vector | vec_srb | function | ||
| 2296 | core::core_arch::s390x::vector | vec_srdb | function | ||
| 2297 | core::core_arch::s390x::vector | vec_srl | function | ||
| 2298 | core::core_arch::s390x::vector | vec_store_len | function | ||
| 2299 | core::core_arch::s390x::vector | vec_store_len_r | function | ||
| 2300 | core::core_arch::s390x::vector | vec_sub | function | ||
| 2301 | core::core_arch::s390x::vector | vec_sub_u128 | function | ||
| 2302 | core::core_arch::s390x::vector | vec_subc | function | ||
| 2303 | core::core_arch::s390x::vector | vec_subc_u128 | function | ||
| 2304 | core::core_arch::s390x::vector | vec_sube_u128 | function | ||
| 2305 | core::core_arch::s390x::vector | vec_subec_u128 | function | ||
| 2306 | core::core_arch::s390x::vector | vec_sum2 | function | ||
| 2307 | core::core_arch::s390x::vector | vec_sum4 | function | ||
| 2308 | core::core_arch::s390x::vector | vec_sum_u128 | function | ||
| 2309 | core::core_arch::s390x::vector | vec_test_mask | function | ||
| 2310 | core::core_arch::s390x::vector | vec_trunc | function | ||
| 2311 | core::core_arch::s390x::vector | vec_unpackh | function | ||
| 2312 | core::core_arch::s390x::vector | vec_unpackl | function | ||
| 2313 | core::core_arch::s390x::vector | vec_unsigned | function | ||
| 2314 | core::core_arch::s390x::vector | vec_xl | function | ||
| 2315 | core::core_arch::s390x::vector | vec_xor | function | ||
| 2316 | core::core_arch::s390x::vector | vec_xst | function | ||
| 2317 | core::core_arch::wasm32::atomic | memory_atomic_notify | function | ||
| 2318 | core::core_arch::wasm32::atomic | memory_atomic_wait32 | function | ||
| 2319 | core::core_arch::wasm32::atomic | memory_atomic_wait64 | function | ||
| 2320 | core::core_arch::wasm32::simd128 | i16x8_load_extend_i8x8 | function | ||
| 2321 | core::core_arch::wasm32::simd128 | i16x8_load_extend_u8x8 | function | ||
| 2322 | core::core_arch::wasm32::simd128 | i32x4_load_extend_i16x4 | function | ||
| 2323 | core::core_arch::wasm32::simd128 | i32x4_load_extend_u16x4 | function | ||
| 2324 | core::core_arch::wasm32::simd128 | i64x2_load_extend_i32x2 | function | ||
| 2325 | core::core_arch::wasm32::simd128 | i64x2_load_extend_u32x2 | function | ||
| 2326 | core::core_arch::wasm32::simd128 | v128_load | function | ||
| 2327 | core::core_arch::wasm32::simd128 | v128_load16_lane | function | ||
| 2328 | core::core_arch::wasm32::simd128 | v128_load16_splat | function | ||
| 2329 | core::core_arch::wasm32::simd128 | v128_load32_lane | function | ||
| 2330 | core::core_arch::wasm32::simd128 | v128_load32_splat | function | ||
| 2331 | core::core_arch::wasm32::simd128 | v128_load32_zero | function | ||
| 2332 | core::core_arch::wasm32::simd128 | v128_load64_lane | function | ||
| 2333 | core::core_arch::wasm32::simd128 | v128_load64_splat | function | ||
| 2334 | core::core_arch::wasm32::simd128 | v128_load64_zero | function | ||
| 2335 | core::core_arch::wasm32::simd128 | v128_load8_lane | function | ||
| 2336 | core::core_arch::wasm32::simd128 | v128_load8_splat | function | ||
| 2337 | core::core_arch::wasm32::simd128 | v128_store | function | ||
| 2338 | core::core_arch::wasm32::simd128 | v128_store16_lane | function | ||
| 2339 | core::core_arch::wasm32::simd128 | v128_store32_lane | function | ||
| 2340 | core::core_arch::wasm32::simd128 | v128_store64_lane | function | ||
| 2341 | core::core_arch::wasm32::simd128 | v128_store8_lane | function | ||
| 2342 | core::core_arch::x86::avx | _mm256_lddqu_si256 | function | ||
| 2343 | core::core_arch::x86::avx | _mm256_load_pd | function | ||
| 2344 | core::core_arch::x86::avx | _mm256_load_ps | function | ||
| 2345 | core::core_arch::x86::avx | _mm256_load_si256 | function | ||
| 2346 | core::core_arch::x86::avx | _mm256_loadu2_m128 | function | ||
| 2347 | core::core_arch::x86::avx | _mm256_loadu2_m128d | function | ||
| 2348 | core::core_arch::x86::avx | _mm256_loadu2_m128i | function | ||
| 2349 | core::core_arch::x86::avx | _mm256_loadu_pd | function | ||
| 2350 | core::core_arch::x86::avx | _mm256_loadu_ps | function | ||
| 2351 | core::core_arch::x86::avx | _mm256_loadu_si256 | function | ||
| 2352 | core::core_arch::x86::avx | _mm256_maskload_pd | function | ||
| 2353 | core::core_arch::x86::avx | _mm256_maskload_ps | function | ||
| 2354 | core::core_arch::x86::avx | _mm256_maskstore_pd | function | ||
| 2355 | core::core_arch::x86::avx | _mm256_maskstore_ps | function | ||
| 2356 | core::core_arch::x86::avx | _mm256_store_pd | function | ||
| 2357 | core::core_arch::x86::avx | _mm256_store_ps | function | ||
| 2358 | core::core_arch::x86::avx | _mm256_store_si256 | function | ||
| 2359 | core::core_arch::x86::avx | _mm256_storeu2_m128 | function | ||
| 2360 | core::core_arch::x86::avx | _mm256_storeu2_m128d | function | ||
| 2361 | core::core_arch::x86::avx | _mm256_storeu2_m128i | function | ||
| 2362 | core::core_arch::x86::avx | _mm256_storeu_pd | function | ||
| 2363 | core::core_arch::x86::avx | _mm256_storeu_ps | function | ||
| 2364 | core::core_arch::x86::avx | _mm256_storeu_si256 | function | ||
| 2365 | core::core_arch::x86::avx | _mm256_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2366 | core::core_arch::x86::avx | _mm256_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2367 | core::core_arch::x86::avx | _mm256_stream_si256 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2368 | core::core_arch::x86::avx | _mm_maskload_pd | function | ||
| 2369 | core::core_arch::x86::avx | _mm_maskload_ps | function | ||
| 2370 | core::core_arch::x86::avx | _mm_maskstore_pd | function | ||
| 2371 | core::core_arch::x86::avx | _mm_maskstore_ps | function | ||
| 2372 | core::core_arch::x86::avx2 | _mm256_i32gather_epi32 | function | ||
| 2373 | core::core_arch::x86::avx2 | _mm256_i32gather_epi64 | function | ||
| 2374 | core::core_arch::x86::avx2 | _mm256_i32gather_pd | function | ||
| 2375 | core::core_arch::x86::avx2 | _mm256_i32gather_ps | function | ||
| 2376 | core::core_arch::x86::avx2 | _mm256_i64gather_epi32 | function | ||
| 2377 | core::core_arch::x86::avx2 | _mm256_i64gather_epi64 | function | ||
| 2378 | core::core_arch::x86::avx2 | _mm256_i64gather_pd | function | ||
| 2379 | core::core_arch::x86::avx2 | _mm256_i64gather_ps | function | ||
| 2380 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi32 | function | ||
| 2381 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_epi64 | function | ||
| 2382 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_pd | function | ||
| 2383 | core::core_arch::x86::avx2 | _mm256_mask_i32gather_ps | function | ||
| 2384 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi32 | function | ||
| 2385 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_epi64 | function | ||
| 2386 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_pd | function | ||
| 2387 | core::core_arch::x86::avx2 | _mm256_mask_i64gather_ps | function | ||
| 2388 | core::core_arch::x86::avx2 | _mm256_maskload_epi32 | function | ||
| 2389 | core::core_arch::x86::avx2 | _mm256_maskload_epi64 | function | ||
| 2390 | core::core_arch::x86::avx2 | _mm256_maskstore_epi32 | function | ||
| 2391 | core::core_arch::x86::avx2 | _mm256_maskstore_epi64 | function | ||
| 2392 | core::core_arch::x86::avx2 | _mm256_stream_load_si256 | function | ||
| 2393 | core::core_arch::x86::avx2 | _mm_i32gather_epi32 | function | ||
| 2394 | core::core_arch::x86::avx2 | _mm_i32gather_epi64 | function | ||
| 2395 | core::core_arch::x86::avx2 | _mm_i32gather_pd | function | ||
| 2396 | core::core_arch::x86::avx2 | _mm_i32gather_ps | function | ||
| 2397 | core::core_arch::x86::avx2 | _mm_i64gather_epi32 | function | ||
| 2398 | core::core_arch::x86::avx2 | _mm_i64gather_epi64 | function | ||
| 2399 | core::core_arch::x86::avx2 | _mm_i64gather_pd | function | ||
| 2400 | core::core_arch::x86::avx2 | _mm_i64gather_ps | function | ||
| 2401 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi32 | function | ||
| 2402 | core::core_arch::x86::avx2 | _mm_mask_i32gather_epi64 | function | ||
| 2403 | core::core_arch::x86::avx2 | _mm_mask_i32gather_pd | function | ||
| 2404 | core::core_arch::x86::avx2 | _mm_mask_i32gather_ps | function | ||
| 2405 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi32 | function | ||
| 2406 | core::core_arch::x86::avx2 | _mm_mask_i64gather_epi64 | function | ||
| 2407 | core::core_arch::x86::avx2 | _mm_mask_i64gather_pd | function | ||
| 2408 | core::core_arch::x86::avx2 | _mm_mask_i64gather_ps | function | ||
| 2409 | core::core_arch::x86::avx2 | _mm_maskload_epi32 | function | ||
| 2410 | core::core_arch::x86::avx2 | _mm_maskload_epi64 | function | ||
| 2411 | core::core_arch::x86::avx2 | _mm_maskstore_epi32 | function | ||
| 2412 | core::core_arch::x86::avx2 | _mm_maskstore_epi64 | function | ||
| 2413 | core::core_arch::x86::avx512bw | _kortest_mask32_u8 | function | ||
| 2414 | core::core_arch::x86::avx512bw | _kortest_mask64_u8 | function | ||
| 2415 | core::core_arch::x86::avx512bw | _ktest_mask32_u8 | function | ||
| 2416 | core::core_arch::x86::avx512bw | _ktest_mask64_u8 | function | ||
| 2417 | core::core_arch::x86::avx512bw | _load_mask32 | function | ||
| 2418 | core::core_arch::x86::avx512bw | _load_mask64 | function | ||
| 2419 | core::core_arch::x86::avx512bw | _mm256_loadu_epi16 | function | ||
| 2420 | core::core_arch::x86::avx512bw | _mm256_loadu_epi8 | function | ||
| 2421 | core::core_arch::x86::avx512bw | _mm256_mask_cvtepi16_storeu_epi8 | function | ||
| 2422 | core::core_arch::x86::avx512bw | _mm256_mask_cvtsepi16_storeu_epi8 | function | ||
| 2423 | core::core_arch::x86::avx512bw | _mm256_mask_cvtusepi16_storeu_epi8 | function | ||
| 2424 | core::core_arch::x86::avx512bw | _mm256_mask_loadu_epi16 | function | ||
| 2425 | core::core_arch::x86::avx512bw | _mm256_mask_loadu_epi8 | function | ||
| 2426 | core::core_arch::x86::avx512bw | _mm256_mask_storeu_epi16 | function | ||
| 2427 | core::core_arch::x86::avx512bw | _mm256_mask_storeu_epi8 | function | ||
| 2428 | core::core_arch::x86::avx512bw | _mm256_maskz_loadu_epi16 | function | ||
| 2429 | core::core_arch::x86::avx512bw | _mm256_maskz_loadu_epi8 | function | ||
| 2430 | core::core_arch::x86::avx512bw | _mm256_storeu_epi16 | function | ||
| 2431 | core::core_arch::x86::avx512bw | _mm256_storeu_epi8 | function | ||
| 2432 | core::core_arch::x86::avx512bw | _mm512_loadu_epi16 | function | ||
| 2433 | core::core_arch::x86::avx512bw | _mm512_loadu_epi8 | function | ||
| 2434 | core::core_arch::x86::avx512bw | _mm512_mask_cvtepi16_storeu_epi8 | function | ||
| 2435 | core::core_arch::x86::avx512bw | _mm512_mask_cvtsepi16_storeu_epi8 | function | ||
| 2436 | core::core_arch::x86::avx512bw | _mm512_mask_cvtusepi16_storeu_epi8 | function | ||
| 2437 | core::core_arch::x86::avx512bw | _mm512_mask_loadu_epi16 | function | ||
| 2438 | core::core_arch::x86::avx512bw | _mm512_mask_loadu_epi8 | function | ||
| 2439 | core::core_arch::x86::avx512bw | _mm512_mask_storeu_epi16 | function | ||
| 2440 | core::core_arch::x86::avx512bw | _mm512_mask_storeu_epi8 | function | ||
| 2441 | core::core_arch::x86::avx512bw | _mm512_maskz_loadu_epi16 | function | ||
| 2442 | core::core_arch::x86::avx512bw | _mm512_maskz_loadu_epi8 | function | ||
| 2443 | core::core_arch::x86::avx512bw | _mm512_storeu_epi16 | function | ||
| 2444 | core::core_arch::x86::avx512bw | _mm512_storeu_epi8 | function | ||
| 2445 | core::core_arch::x86::avx512bw | _mm_loadu_epi16 | function | ||
| 2446 | core::core_arch::x86::avx512bw | _mm_loadu_epi8 | function | ||
| 2447 | core::core_arch::x86::avx512bw | _mm_mask_cvtepi16_storeu_epi8 | function | ||
| 2448 | core::core_arch::x86::avx512bw | _mm_mask_cvtsepi16_storeu_epi8 | function | ||
| 2449 | core::core_arch::x86::avx512bw | _mm_mask_cvtusepi16_storeu_epi8 | function | ||
| 2450 | core::core_arch::x86::avx512bw | _mm_mask_loadu_epi16 | function | ||
| 2451 | core::core_arch::x86::avx512bw | _mm_mask_loadu_epi8 | function | ||
| 2452 | core::core_arch::x86::avx512bw | _mm_mask_storeu_epi16 | function | ||
| 2453 | core::core_arch::x86::avx512bw | _mm_mask_storeu_epi8 | function | ||
| 2454 | core::core_arch::x86::avx512bw | _mm_maskz_loadu_epi16 | function | ||
| 2455 | core::core_arch::x86::avx512bw | _mm_maskz_loadu_epi8 | function | ||
| 2456 | core::core_arch::x86::avx512bw | _mm_storeu_epi16 | function | ||
| 2457 | core::core_arch::x86::avx512bw | _mm_storeu_epi8 | function | ||
| 2458 | core::core_arch::x86::avx512bw | _store_mask32 | function | ||
| 2459 | core::core_arch::x86::avx512bw | _store_mask64 | function | ||
| 2460 | core::core_arch::x86::avx512dq | _kortest_mask8_u8 | function | ||
| 2461 | core::core_arch::x86::avx512dq | _ktest_mask16_u8 | function | ||
| 2462 | core::core_arch::x86::avx512dq | _ktest_mask8_u8 | function | ||
| 2463 | core::core_arch::x86::avx512dq | _load_mask8 | function | ||
| 2464 | core::core_arch::x86::avx512dq | _store_mask8 | function | ||
| 2465 | core::core_arch::x86::avx512f | _kortest_mask16_u8 | function | ||
| 2466 | core::core_arch::x86::avx512f | _load_mask16 | function | ||
| 2467 | core::core_arch::x86::avx512f | _mm256_i32scatter_epi32 | function | ||
| 2468 | core::core_arch::x86::avx512f | _mm256_i32scatter_epi64 | function | ||
| 2469 | core::core_arch::x86::avx512f | _mm256_i32scatter_pd | function | ||
| 2470 | core::core_arch::x86::avx512f | _mm256_i32scatter_ps | function | ||
| 2471 | core::core_arch::x86::avx512f | _mm256_i64scatter_epi32 | function | ||
| 2472 | core::core_arch::x86::avx512f | _mm256_i64scatter_epi64 | function | ||
| 2473 | core::core_arch::x86::avx512f | _mm256_i64scatter_pd | function | ||
| 2474 | core::core_arch::x86::avx512f | _mm256_i64scatter_ps | function | ||
| 2475 | core::core_arch::x86::avx512f | _mm256_load_epi32 | function | ||
| 2476 | core::core_arch::x86::avx512f | _mm256_load_epi64 | function | ||
| 2477 | core::core_arch::x86::avx512f | _mm256_loadu_epi32 | function | ||
| 2478 | core::core_arch::x86::avx512f | _mm256_loadu_epi64 | function | ||
| 2479 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_epi32 | function | ||
| 2480 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_epi64 | function | ||
| 2481 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_pd | function | ||
| 2482 | core::core_arch::x86::avx512f | _mm256_mask_compressstoreu_ps | function | ||
| 2483 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi32_storeu_epi16 | function | ||
| 2484 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi32_storeu_epi8 | function | ||
| 2485 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi16 | function | ||
| 2486 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi32 | function | ||
| 2487 | core::core_arch::x86::avx512f | _mm256_mask_cvtepi64_storeu_epi8 | function | ||
| 2488 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi32_storeu_epi16 | function | ||
| 2489 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi32_storeu_epi8 | function | ||
| 2490 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi16 | function | ||
| 2491 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi32 | function | ||
| 2492 | core::core_arch::x86::avx512f | _mm256_mask_cvtsepi64_storeu_epi8 | function | ||
| 2493 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi32_storeu_epi16 | function | ||
| 2494 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi32_storeu_epi8 | function | ||
| 2495 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi16 | function | ||
| 2496 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi32 | function | ||
| 2497 | core::core_arch::x86::avx512f | _mm256_mask_cvtusepi64_storeu_epi8 | function | ||
| 2498 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_epi32 | function | ||
| 2499 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_epi64 | function | ||
| 2500 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_pd | function | ||
| 2501 | core::core_arch::x86::avx512f | _mm256_mask_expandloadu_ps | function | ||
| 2502 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_epi32 | function | ||
| 2503 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_epi64 | function | ||
| 2504 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_pd | function | ||
| 2505 | core::core_arch::x86::avx512f | _mm256_mask_i32scatter_ps | function | ||
| 2506 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_epi32 | function | ||
| 2507 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_epi64 | function | ||
| 2508 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_pd | function | ||
| 2509 | core::core_arch::x86::avx512f | _mm256_mask_i64scatter_ps | function | ||
| 2510 | core::core_arch::x86::avx512f | _mm256_mask_load_epi32 | function | ||
| 2511 | core::core_arch::x86::avx512f | _mm256_mask_load_epi64 | function | ||
| 2512 | core::core_arch::x86::avx512f | _mm256_mask_load_pd | function | ||
| 2513 | core::core_arch::x86::avx512f | _mm256_mask_load_ps | function | ||
| 2514 | core::core_arch::x86::avx512f | _mm256_mask_loadu_epi32 | function | ||
| 2515 | core::core_arch::x86::avx512f | _mm256_mask_loadu_epi64 | function | ||
| 2516 | core::core_arch::x86::avx512f | _mm256_mask_loadu_pd | function | ||
| 2517 | core::core_arch::x86::avx512f | _mm256_mask_loadu_ps | function | ||
| 2518 | core::core_arch::x86::avx512f | _mm256_mask_store_epi32 | function | ||
| 2519 | core::core_arch::x86::avx512f | _mm256_mask_store_epi64 | function | ||
| 2520 | core::core_arch::x86::avx512f | _mm256_mask_store_pd | function | ||
| 2521 | core::core_arch::x86::avx512f | _mm256_mask_store_ps | function | ||
| 2522 | core::core_arch::x86::avx512f | _mm256_mask_storeu_epi32 | function | ||
| 2523 | core::core_arch::x86::avx512f | _mm256_mask_storeu_epi64 | function | ||
| 2524 | core::core_arch::x86::avx512f | _mm256_mask_storeu_pd | function | ||
| 2525 | core::core_arch::x86::avx512f | _mm256_mask_storeu_ps | function | ||
| 2526 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_epi32 | function | ||
| 2527 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_epi64 | function | ||
| 2528 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_pd | function | ||
| 2529 | core::core_arch::x86::avx512f | _mm256_maskz_expandloadu_ps | function | ||
| 2530 | core::core_arch::x86::avx512f | _mm256_maskz_load_epi32 | function | ||
| 2531 | core::core_arch::x86::avx512f | _mm256_maskz_load_epi64 | function | ||
| 2532 | core::core_arch::x86::avx512f | _mm256_maskz_load_pd | function | ||
| 2533 | core::core_arch::x86::avx512f | _mm256_maskz_load_ps | function | ||
| 2534 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_epi32 | function | ||
| 2535 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_epi64 | function | ||
| 2536 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_pd | function | ||
| 2537 | core::core_arch::x86::avx512f | _mm256_maskz_loadu_ps | function | ||
| 2538 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_epi32 | function | ||
| 2539 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_epi64 | function | ||
| 2540 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_pd | function | ||
| 2541 | core::core_arch::x86::avx512f | _mm256_mmask_i32gather_ps | function | ||
| 2542 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_epi32 | function | ||
| 2543 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_epi64 | function | ||
| 2544 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_pd | function | ||
| 2545 | core::core_arch::x86::avx512f | _mm256_mmask_i64gather_ps | function | ||
| 2546 | core::core_arch::x86::avx512f | _mm256_store_epi32 | function | ||
| 2547 | core::core_arch::x86::avx512f | _mm256_store_epi64 | function | ||
| 2548 | core::core_arch::x86::avx512f | _mm256_storeu_epi32 | function | ||
| 2549 | core::core_arch::x86::avx512f | _mm256_storeu_epi64 | function | ||
| 2550 | core::core_arch::x86::avx512f | _mm512_i32gather_epi32 | function | ||
| 2551 | core::core_arch::x86::avx512f | _mm512_i32gather_epi64 | function | ||
| 2552 | core::core_arch::x86::avx512f | _mm512_i32gather_pd | function | ||
| 2553 | core::core_arch::x86::avx512f | _mm512_i32gather_ps | function | ||
| 2554 | core::core_arch::x86::avx512f | _mm512_i32logather_epi64 | function | ||
| 2555 | core::core_arch::x86::avx512f | _mm512_i32logather_pd | function | ||
| 2556 | core::core_arch::x86::avx512f | _mm512_i32loscatter_epi64 | function | ||
| 2557 | core::core_arch::x86::avx512f | _mm512_i32loscatter_pd | function | ||
| 2558 | core::core_arch::x86::avx512f | _mm512_i32scatter_epi32 | function | ||
| 2559 | core::core_arch::x86::avx512f | _mm512_i32scatter_epi64 | function | ||
| 2560 | core::core_arch::x86::avx512f | _mm512_i32scatter_pd | function | ||
| 2561 | core::core_arch::x86::avx512f | _mm512_i32scatter_ps | function | ||
| 2562 | core::core_arch::x86::avx512f | _mm512_i64gather_epi32 | function | ||
| 2563 | core::core_arch::x86::avx512f | _mm512_i64gather_epi64 | function | ||
| 2564 | core::core_arch::x86::avx512f | _mm512_i64gather_pd | function | ||
| 2565 | core::core_arch::x86::avx512f | _mm512_i64gather_ps | function | ||
| 2566 | core::core_arch::x86::avx512f | _mm512_i64scatter_epi32 | function | ||
| 2567 | core::core_arch::x86::avx512f | _mm512_i64scatter_epi64 | function | ||
| 2568 | core::core_arch::x86::avx512f | _mm512_i64scatter_pd | function | ||
| 2569 | core::core_arch::x86::avx512f | _mm512_i64scatter_ps | function | ||
| 2570 | core::core_arch::x86::avx512f | _mm512_load_epi32 | function | ||
| 2571 | core::core_arch::x86::avx512f | _mm512_load_epi64 | function | ||
| 2572 | core::core_arch::x86::avx512f | _mm512_load_pd | function | ||
| 2573 | core::core_arch::x86::avx512f | _mm512_load_ps | function | ||
| 2574 | core::core_arch::x86::avx512f | _mm512_load_si512 | function | ||
| 2575 | core::core_arch::x86::avx512f | _mm512_loadu_epi32 | function | ||
| 2576 | core::core_arch::x86::avx512f | _mm512_loadu_epi64 | function | ||
| 2577 | core::core_arch::x86::avx512f | _mm512_loadu_pd | function | ||
| 2578 | core::core_arch::x86::avx512f | _mm512_loadu_ps | function | ||
| 2579 | core::core_arch::x86::avx512f | _mm512_loadu_si512 | function | ||
| 2580 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_epi32 | function | ||
| 2581 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_epi64 | function | ||
| 2582 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_pd | function | ||
| 2583 | core::core_arch::x86::avx512f | _mm512_mask_compressstoreu_ps | function | ||
| 2584 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi32_storeu_epi16 | function | ||
| 2585 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi32_storeu_epi8 | function | ||
| 2586 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi16 | function | ||
| 2587 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi32 | function | ||
| 2588 | core::core_arch::x86::avx512f | _mm512_mask_cvtepi64_storeu_epi8 | function | ||
| 2589 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi32_storeu_epi16 | function | ||
| 2590 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi32_storeu_epi8 | function | ||
| 2591 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi64_storeu_epi16 | function | ||
| 2592 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi64_storeu_epi32 | function | ||
| 2593 | core::core_arch::x86::avx512f | _mm512_mask_cvtsepi64_storeu_epi8 | function | ||
| 2594 | core::core_arch::x86::avx512f | _mm512_mask_cvtusepi32_storeu_epi16 | function | ||
| 2595 | core::core_arch::x86::avx512f | _mm512_mask_cvtusepi32_storeu_epi8 | function | ||
| 2596 | core::core_arch::x86::avx512f | _mm512_mask_cvtusepi64_storeu_epi16 | function | ||
| 2597 | core::core_arch::x86::avx512f | _mm512_mask_cvtusepi64_storeu_epi32 | function | ||
| 2598 | core::core_arch::x86::avx512f | _mm512_mask_cvtusepi64_storeu_epi8 | function | ||
| 2599 | core::core_arch::x86::avx512f | _mm512_mask_expandloadu_epi32 | function | ||
| 2600 | core::core_arch::x86::avx512f | _mm512_mask_expandloadu_epi64 | function | ||
| 2601 | core::core_arch::x86::avx512f | _mm512_mask_expandloadu_pd | function | ||
| 2602 | core::core_arch::x86::avx512f | _mm512_mask_expandloadu_ps | function | ||
| 2603 | core::core_arch::x86::avx512f | _mm512_mask_i32gather_epi32 | function | ||
| 2604 | core::core_arch::x86::avx512f | _mm512_mask_i32gather_epi64 | function | ||
| 2605 | core::core_arch::x86::avx512f | _mm512_mask_i32gather_pd | function | ||
| 2606 | core::core_arch::x86::avx512f | _mm512_mask_i32gather_ps | function | ||
| 2607 | core::core_arch::x86::avx512f | _mm512_mask_i32logather_epi64 | function | ||
| 2608 | core::core_arch::x86::avx512f | _mm512_mask_i32logather_pd | function | ||
| 2609 | core::core_arch::x86::avx512f | _mm512_mask_i32loscatter_epi64 | function | ||
| 2610 | core::core_arch::x86::avx512f | _mm512_mask_i32loscatter_pd | function | ||
| 2611 | core::core_arch::x86::avx512f | _mm512_mask_i32scatter_epi32 | function | ||
| 2612 | core::core_arch::x86::avx512f | _mm512_mask_i32scatter_epi64 | function | ||
| 2613 | core::core_arch::x86::avx512f | _mm512_mask_i32scatter_pd | function | ||
| 2614 | core::core_arch::x86::avx512f | _mm512_mask_i32scatter_ps | function | ||
| 2615 | core::core_arch::x86::avx512f | _mm512_mask_i64gather_epi32 | function | ||
| 2616 | core::core_arch::x86::avx512f | _mm512_mask_i64gather_epi64 | function | ||
| 2617 | core::core_arch::x86::avx512f | _mm512_mask_i64gather_pd | function | ||
| 2618 | core::core_arch::x86::avx512f | _mm512_mask_i64gather_ps | function | ||
| 2619 | core::core_arch::x86::avx512f | _mm512_mask_i64scatter_epi32 | function | ||
| 2620 | core::core_arch::x86::avx512f | _mm512_mask_i64scatter_epi64 | function | ||
| 2621 | core::core_arch::x86::avx512f | _mm512_mask_i64scatter_pd | function | ||
| 2622 | core::core_arch::x86::avx512f | _mm512_mask_i64scatter_ps | function | ||
| 2623 | core::core_arch::x86::avx512f | _mm512_mask_load_epi32 | function | ||
| 2624 | core::core_arch::x86::avx512f | _mm512_mask_load_epi64 | function | ||
| 2625 | core::core_arch::x86::avx512f | _mm512_mask_load_pd | function | ||
| 2626 | core::core_arch::x86::avx512f | _mm512_mask_load_ps | function | ||
| 2627 | core::core_arch::x86::avx512f | _mm512_mask_loadu_epi32 | function | ||
| 2628 | core::core_arch::x86::avx512f | _mm512_mask_loadu_epi64 | function | ||
| 2629 | core::core_arch::x86::avx512f | _mm512_mask_loadu_pd | function | ||
| 2630 | core::core_arch::x86::avx512f | _mm512_mask_loadu_ps | function | ||
| 2631 | core::core_arch::x86::avx512f | _mm512_mask_store_epi32 | function | ||
| 2632 | core::core_arch::x86::avx512f | _mm512_mask_store_epi64 | function | ||
| 2633 | core::core_arch::x86::avx512f | _mm512_mask_store_pd | function | ||
| 2634 | core::core_arch::x86::avx512f | _mm512_mask_store_ps | function | ||
| 2635 | core::core_arch::x86::avx512f | _mm512_mask_storeu_epi32 | function | ||
| 2636 | core::core_arch::x86::avx512f | _mm512_mask_storeu_epi64 | function | ||
| 2637 | core::core_arch::x86::avx512f | _mm512_mask_storeu_pd | function | ||
| 2638 | core::core_arch::x86::avx512f | _mm512_mask_storeu_ps | function | ||
| 2639 | core::core_arch::x86::avx512f | _mm512_maskz_expandloadu_epi32 | function | ||
| 2640 | core::core_arch::x86::avx512f | _mm512_maskz_expandloadu_epi64 | function | ||
| 2641 | core::core_arch::x86::avx512f | _mm512_maskz_expandloadu_pd | function | ||
| 2642 | core::core_arch::x86::avx512f | _mm512_maskz_expandloadu_ps | function | ||
| 2643 | core::core_arch::x86::avx512f | _mm512_maskz_load_epi32 | function | ||
| 2644 | core::core_arch::x86::avx512f | _mm512_maskz_load_epi64 | function | ||
| 2645 | core::core_arch::x86::avx512f | _mm512_maskz_load_pd | function | ||
| 2646 | core::core_arch::x86::avx512f | _mm512_maskz_load_ps | function | ||
| 2647 | core::core_arch::x86::avx512f | _mm512_maskz_loadu_epi32 | function | ||
| 2648 | core::core_arch::x86::avx512f | _mm512_maskz_loadu_epi64 | function | ||
| 2649 | core::core_arch::x86::avx512f | _mm512_maskz_loadu_pd | function | ||
| 2650 | core::core_arch::x86::avx512f | _mm512_maskz_loadu_ps | function | ||
| 2651 | core::core_arch::x86::avx512f | _mm512_store_epi32 | function | ||
| 2652 | core::core_arch::x86::avx512f | _mm512_store_epi64 | function | ||
| 2653 | core::core_arch::x86::avx512f | _mm512_store_pd | function | ||
| 2654 | core::core_arch::x86::avx512f | _mm512_store_ps | function | ||
| 2655 | core::core_arch::x86::avx512f | _mm512_store_si512 | function | ||
| 2656 | core::core_arch::x86::avx512f | _mm512_storeu_epi32 | function | ||
| 2657 | core::core_arch::x86::avx512f | _mm512_storeu_epi64 | function | ||
| 2658 | core::core_arch::x86::avx512f | _mm512_storeu_pd | function | ||
| 2659 | core::core_arch::x86::avx512f | _mm512_storeu_ps | function | ||
| 2660 | core::core_arch::x86::avx512f | _mm512_storeu_si512 | function | ||
| 2661 | core::core_arch::x86::avx512f | _mm512_stream_load_si512 | function | ||
| 2662 | core::core_arch::x86::avx512f | _mm512_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2663 | core::core_arch::x86::avx512f | _mm512_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2664 | core::core_arch::x86::avx512f | _mm512_stream_si512 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2665 | core::core_arch::x86::avx512f | _mm_i32scatter_epi32 | function | ||
| 2666 | core::core_arch::x86::avx512f | _mm_i32scatter_epi64 | function | ||
| 2667 | core::core_arch::x86::avx512f | _mm_i32scatter_pd | function | ||
| 2668 | core::core_arch::x86::avx512f | _mm_i32scatter_ps | function | ||
| 2669 | core::core_arch::x86::avx512f | _mm_i64scatter_epi32 | function | ||
| 2670 | core::core_arch::x86::avx512f | _mm_i64scatter_epi64 | function | ||
| 2671 | core::core_arch::x86::avx512f | _mm_i64scatter_pd | function | ||
| 2672 | core::core_arch::x86::avx512f | _mm_i64scatter_ps | function | ||
| 2673 | core::core_arch::x86::avx512f | _mm_load_epi32 | function | ||
| 2674 | core::core_arch::x86::avx512f | _mm_load_epi64 | function | ||
| 2675 | core::core_arch::x86::avx512f | _mm_loadu_epi32 | function | ||
| 2676 | core::core_arch::x86::avx512f | _mm_loadu_epi64 | function | ||
| 2677 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_epi32 | function | ||
| 2678 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_epi64 | function | ||
| 2679 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_pd | function | ||
| 2680 | core::core_arch::x86::avx512f | _mm_mask_compressstoreu_ps | function | ||
| 2681 | core::core_arch::x86::avx512f | _mm_mask_cvtepi32_storeu_epi16 | function | ||
| 2682 | core::core_arch::x86::avx512f | _mm_mask_cvtepi32_storeu_epi8 | function | ||
| 2683 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi16 | function | ||
| 2684 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi32 | function | ||
| 2685 | core::core_arch::x86::avx512f | _mm_mask_cvtepi64_storeu_epi8 | function | ||
| 2686 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi32_storeu_epi16 | function | ||
| 2687 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi32_storeu_epi8 | function | ||
| 2688 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi16 | function | ||
| 2689 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi32 | function | ||
| 2690 | core::core_arch::x86::avx512f | _mm_mask_cvtsepi64_storeu_epi8 | function | ||
| 2691 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi32_storeu_epi16 | function | ||
| 2692 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi32_storeu_epi8 | function | ||
| 2693 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi16 | function | ||
| 2694 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi32 | function | ||
| 2695 | core::core_arch::x86::avx512f | _mm_mask_cvtusepi64_storeu_epi8 | function | ||
| 2696 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_epi32 | function | ||
| 2697 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_epi64 | function | ||
| 2698 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_pd | function | ||
| 2699 | core::core_arch::x86::avx512f | _mm_mask_expandloadu_ps | function | ||
| 2700 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_epi32 | function | ||
| 2701 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_epi64 | function | ||
| 2702 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_pd | function | ||
| 2703 | core::core_arch::x86::avx512f | _mm_mask_i32scatter_ps | function | ||
| 2704 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_epi32 | function | ||
| 2705 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_epi64 | function | ||
| 2706 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_pd | function | ||
| 2707 | core::core_arch::x86::avx512f | _mm_mask_i64scatter_ps | function | ||
| 2708 | core::core_arch::x86::avx512f | _mm_mask_load_epi32 | function | ||
| 2709 | core::core_arch::x86::avx512f | _mm_mask_load_epi64 | function | ||
| 2710 | core::core_arch::x86::avx512f | _mm_mask_load_pd | function | ||
| 2711 | core::core_arch::x86::avx512f | _mm_mask_load_ps | function | ||
| 2712 | core::core_arch::x86::avx512f | _mm_mask_load_sd | function | ||
| 2713 | core::core_arch::x86::avx512f | _mm_mask_load_ss | function | ||
| 2714 | core::core_arch::x86::avx512f | _mm_mask_loadu_epi32 | function | ||
| 2715 | core::core_arch::x86::avx512f | _mm_mask_loadu_epi64 | function | ||
| 2716 | core::core_arch::x86::avx512f | _mm_mask_loadu_pd | function | ||
| 2717 | core::core_arch::x86::avx512f | _mm_mask_loadu_ps | function | ||
| 2718 | core::core_arch::x86::avx512f | _mm_mask_store_epi32 | function | ||
| 2719 | core::core_arch::x86::avx512f | _mm_mask_store_epi64 | function | ||
| 2720 | core::core_arch::x86::avx512f | _mm_mask_store_pd | function | ||
| 2721 | core::core_arch::x86::avx512f | _mm_mask_store_ps | function | ||
| 2722 | core::core_arch::x86::avx512f | _mm_mask_store_sd | function | ||
| 2723 | core::core_arch::x86::avx512f | _mm_mask_store_ss | function | ||
| 2724 | core::core_arch::x86::avx512f | _mm_mask_storeu_epi32 | function | ||
| 2725 | core::core_arch::x86::avx512f | _mm_mask_storeu_epi64 | function | ||
| 2726 | core::core_arch::x86::avx512f | _mm_mask_storeu_pd | function | ||
| 2727 | core::core_arch::x86::avx512f | _mm_mask_storeu_ps | function | ||
| 2728 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_epi32 | function | ||
| 2729 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_epi64 | function | ||
| 2730 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_pd | function | ||
| 2731 | core::core_arch::x86::avx512f | _mm_maskz_expandloadu_ps | function | ||
| 2732 | core::core_arch::x86::avx512f | _mm_maskz_load_epi32 | function | ||
| 2733 | core::core_arch::x86::avx512f | _mm_maskz_load_epi64 | function | ||
| 2734 | core::core_arch::x86::avx512f | _mm_maskz_load_pd | function | ||
| 2735 | core::core_arch::x86::avx512f | _mm_maskz_load_ps | function | ||
| 2736 | core::core_arch::x86::avx512f | _mm_maskz_load_sd | function | ||
| 2737 | core::core_arch::x86::avx512f | _mm_maskz_load_ss | function | ||
| 2738 | core::core_arch::x86::avx512f | _mm_maskz_loadu_epi32 | function | ||
| 2739 | core::core_arch::x86::avx512f | _mm_maskz_loadu_epi64 | function | ||
| 2740 | core::core_arch::x86::avx512f | _mm_maskz_loadu_pd | function | ||
| 2741 | core::core_arch::x86::avx512f | _mm_maskz_loadu_ps | function | ||
| 2742 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_epi32 | function | ||
| 2743 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_epi64 | function | ||
| 2744 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_pd | function | ||
| 2745 | core::core_arch::x86::avx512f | _mm_mmask_i32gather_ps | function | ||
| 2746 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_epi32 | function | ||
| 2747 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_epi64 | function | ||
| 2748 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_pd | function | ||
| 2749 | core::core_arch::x86::avx512f | _mm_mmask_i64gather_ps | function | ||
| 2750 | core::core_arch::x86::avx512f | _mm_store_epi32 | function | ||
| 2751 | core::core_arch::x86::avx512f | _mm_store_epi64 | function | ||
| 2752 | core::core_arch::x86::avx512f | _mm_storeu_epi32 | function | ||
| 2753 | core::core_arch::x86::avx512f | _mm_storeu_epi64 | function | ||
| 2754 | core::core_arch::x86::avx512f | _store_mask16 | function | ||
| 2755 | core::core_arch::x86::avx512fp16 | _mm256_load_ph | function | ||
| 2756 | core::core_arch::x86::avx512fp16 | _mm256_loadu_ph | function | ||
| 2757 | core::core_arch::x86::avx512fp16 | _mm256_store_ph | function | ||
| 2758 | core::core_arch::x86::avx512fp16 | _mm256_storeu_ph | function | ||
| 2759 | core::core_arch::x86::avx512fp16 | _mm512_load_ph | function | ||
| 2760 | core::core_arch::x86::avx512fp16 | _mm512_loadu_ph | function | ||
| 2761 | core::core_arch::x86::avx512fp16 | _mm512_store_ph | function | ||
| 2762 | core::core_arch::x86::avx512fp16 | _mm512_storeu_ph | function | ||
| 2763 | core::core_arch::x86::avx512fp16 | _mm_load_ph | function | ||
| 2764 | core::core_arch::x86::avx512fp16 | _mm_load_sh | function | ||
| 2765 | core::core_arch::x86::avx512fp16 | _mm_loadu_ph | function | ||
| 2766 | core::core_arch::x86::avx512fp16 | _mm_mask_load_sh | function | ||
| 2767 | core::core_arch::x86::avx512fp16 | _mm_mask_store_sh | function | ||
| 2768 | core::core_arch::x86::avx512fp16 | _mm_maskz_load_sh | function | ||
| 2769 | core::core_arch::x86::avx512fp16 | _mm_store_ph | function | ||
| 2770 | core::core_arch::x86::avx512fp16 | _mm_store_sh | function | ||
| 2771 | core::core_arch::x86::avx512fp16 | _mm_storeu_ph | function | ||
| 2772 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_compressstoreu_epi16 | function | ||
| 2773 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_compressstoreu_epi8 | function | ||
| 2774 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_expandloadu_epi16 | function | ||
| 2775 | core::core_arch::x86::avx512vbmi2 | _mm256_mask_expandloadu_epi8 | function | ||
| 2776 | core::core_arch::x86::avx512vbmi2 | _mm256_maskz_expandloadu_epi16 | function | ||
| 2777 | core::core_arch::x86::avx512vbmi2 | _mm256_maskz_expandloadu_epi8 | function | ||
| 2778 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_compressstoreu_epi16 | function | ||
| 2779 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_compressstoreu_epi8 | function | ||
| 2780 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_expandloadu_epi16 | function | ||
| 2781 | core::core_arch::x86::avx512vbmi2 | _mm512_mask_expandloadu_epi8 | function | ||
| 2782 | core::core_arch::x86::avx512vbmi2 | _mm512_maskz_expandloadu_epi16 | function | ||
| 2783 | core::core_arch::x86::avx512vbmi2 | _mm512_maskz_expandloadu_epi8 | function | ||
| 2784 | core::core_arch::x86::avx512vbmi2 | _mm_mask_compressstoreu_epi16 | function | ||
| 2785 | core::core_arch::x86::avx512vbmi2 | _mm_mask_compressstoreu_epi8 | function | ||
| 2786 | core::core_arch::x86::avx512vbmi2 | _mm_mask_expandloadu_epi16 | function | ||
| 2787 | core::core_arch::x86::avx512vbmi2 | _mm_mask_expandloadu_epi8 | function | ||
| 2788 | core::core_arch::x86::avx512vbmi2 | _mm_maskz_expandloadu_epi16 | function | ||
| 2789 | core::core_arch::x86::avx512vbmi2 | _mm_maskz_expandloadu_epi8 | function | ||
| 2790 | core::core_arch::x86::avxneconvert | _mm256_bcstnebf16_ps | function | ||
| 2791 | core::core_arch::x86::avxneconvert | _mm256_bcstnesh_ps | function | ||
| 2792 | core::core_arch::x86::avxneconvert | _mm256_cvtneebf16_ps | function | ||
| 2793 | core::core_arch::x86::avxneconvert | _mm256_cvtneeph_ps | function | ||
| 2794 | core::core_arch::x86::avxneconvert | _mm256_cvtneobf16_ps | function | ||
| 2795 | core::core_arch::x86::avxneconvert | _mm256_cvtneoph_ps | function | ||
| 2796 | core::core_arch::x86::avxneconvert | _mm_bcstnebf16_ps | function | ||
| 2797 | core::core_arch::x86::avxneconvert | _mm_bcstnesh_ps | function | ||
| 2798 | core::core_arch::x86::avxneconvert | _mm_cvtneebf16_ps | function | ||
| 2799 | core::core_arch::x86::avxneconvert | _mm_cvtneeph_ps | function | ||
| 2800 | core::core_arch::x86::avxneconvert | _mm_cvtneobf16_ps | function | ||
| 2801 | core::core_arch::x86::avxneconvert | _mm_cvtneoph_ps | function | ||
| 2802 | core::core_arch::x86::bt | _bittest | function | ||
| 2803 | core::core_arch::x86::bt | _bittestandcomplement | function | ||
| 2804 | core::core_arch::x86::bt | _bittestandreset | function | ||
| 2805 | core::core_arch::x86::bt | _bittestandset | function | ||
| 2806 | core::core_arch::x86::fxsr | _fxrstor | function | ||
| 2807 | core::core_arch::x86::fxsr | _fxsave | function | ||
| 2808 | core::core_arch::x86::kl | _mm_aesdec128kl_u8 | function | ||
| 2809 | core::core_arch::x86::kl | _mm_aesdec256kl_u8 | function | ||
| 2810 | core::core_arch::x86::kl | _mm_aesdecwide128kl_u8 | function | ||
| 2811 | core::core_arch::x86::kl | _mm_aesdecwide256kl_u8 | function | ||
| 2812 | core::core_arch::x86::kl | _mm_aesenc128kl_u8 | function | ||
| 2813 | core::core_arch::x86::kl | _mm_aesenc256kl_u8 | function | ||
| 2814 | core::core_arch::x86::kl | _mm_aesencwide128kl_u8 | function | ||
| 2815 | core::core_arch::x86::kl | _mm_aesencwide256kl_u8 | function | ||
| 2816 | core::core_arch::x86::kl | _mm_encodekey128_u32 | function | ||
| 2817 | core::core_arch::x86::kl | _mm_encodekey256_u32 | function | ||
| 2818 | core::core_arch::x86::kl | _mm_loadiwkey | function | ||
| 2819 | core::core_arch::x86::rdtsc | __rdtscp | function | ||
| 2820 | core::core_arch::x86::rdtsc | _rdtsc | function | ||
| 2821 | core::core_arch::x86::rtm | _xabort | function | ||
| 2822 | core::core_arch::x86::rtm | _xbegin | function | ||
| 2823 | core::core_arch::x86::rtm | _xend | function | ||
| 2824 | core::core_arch::x86::rtm | _xtest | function | ||
| 2825 | core::core_arch::x86::sse | _MM_GET_EXCEPTION_MASK | function | ||
| 2826 | core::core_arch::x86::sse | _MM_GET_EXCEPTION_STATE | function | ||
| 2827 | core::core_arch::x86::sse | _MM_GET_FLUSH_ZERO_MODE | function | ||
| 2828 | core::core_arch::x86::sse | _MM_GET_ROUNDING_MODE | function | ||
| 2829 | core::core_arch::x86::sse | _MM_SET_EXCEPTION_MASK | function | ||
| 2830 | core::core_arch::x86::sse | _MM_SET_EXCEPTION_STATE | function | ||
| 2831 | core::core_arch::x86::sse | _MM_SET_FLUSH_ZERO_MODE | function | ||
| 2832 | core::core_arch::x86::sse | _MM_SET_ROUNDING_MODE | function | ||
| 2833 | core::core_arch::x86::sse | _mm_getcsr | function | ||
| 2834 | core::core_arch::x86::sse | _mm_load1_ps | function | ||
| 2835 | core::core_arch::x86::sse | _mm_load_ps | function | ||
| 2836 | core::core_arch::x86::sse | _mm_load_ps1 | function | ||
| 2837 | core::core_arch::x86::sse | _mm_load_ss | function | ||
| 2838 | core::core_arch::x86::sse | _mm_loadr_ps | function | ||
| 2839 | core::core_arch::x86::sse | _mm_loadu_ps | function | ||
| 2840 | core::core_arch::x86::sse | _mm_setcsr | function | ||
| 2841 | core::core_arch::x86::sse | _mm_store1_ps | function | ||
| 2842 | core::core_arch::x86::sse | _mm_store_ps | function | ||
| 2843 | core::core_arch::x86::sse | _mm_store_ps1 | function | ||
| 2844 | core::core_arch::x86::sse | _mm_store_ss | function | ||
| 2845 | core::core_arch::x86::sse | _mm_storer_ps | function | ||
| 2846 | core::core_arch::x86::sse | _mm_storeu_ps | function | ||
| 2847 | core::core_arch::x86::sse | _mm_stream_ps | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2848 | core::core_arch::x86::sse2 | _mm_clflush | function | ||
| 2849 | core::core_arch::x86::sse2 | _mm_load1_pd | function | ||
| 2850 | core::core_arch::x86::sse2 | _mm_load_pd | function | ||
| 2851 | core::core_arch::x86::sse2 | _mm_load_pd1 | function | ||
| 2852 | core::core_arch::x86::sse2 | _mm_load_sd | function | ||
| 2853 | core::core_arch::x86::sse2 | _mm_load_si128 | function | ||
| 2854 | core::core_arch::x86::sse2 | _mm_loadh_pd | function | ||
| 2855 | core::core_arch::x86::sse2 | _mm_loadl_epi64 | function | ||
| 2856 | core::core_arch::x86::sse2 | _mm_loadl_pd | function | ||
| 2857 | core::core_arch::x86::sse2 | _mm_loadr_pd | function | ||
| 2858 | core::core_arch::x86::sse2 | _mm_loadu_pd | function | ||
| 2859 | core::core_arch::x86::sse2 | _mm_loadu_si128 | function | ||
| 2860 | core::core_arch::x86::sse2 | _mm_loadu_si16 | function | ||
| 2861 | core::core_arch::x86::sse2 | _mm_loadu_si32 | function | ||
| 2862 | core::core_arch::x86::sse2 | _mm_loadu_si64 | function | ||
| 2863 | core::core_arch::x86::sse2 | _mm_maskmoveu_si128 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2864 | core::core_arch::x86::sse2 | _mm_store1_pd | function | ||
| 2865 | core::core_arch::x86::sse2 | _mm_store_pd | function | ||
| 2866 | core::core_arch::x86::sse2 | _mm_store_pd1 | function | ||
| 2867 | core::core_arch::x86::sse2 | _mm_store_sd | function | ||
| 2868 | core::core_arch::x86::sse2 | _mm_store_si128 | function | ||
| 2869 | core::core_arch::x86::sse2 | _mm_storeh_pd | function | ||
| 2870 | core::core_arch::x86::sse2 | _mm_storel_epi64 | function | ||
| 2871 | core::core_arch::x86::sse2 | _mm_storel_pd | function | ||
| 2872 | core::core_arch::x86::sse2 | _mm_storer_pd | function | ||
| 2873 | core::core_arch::x86::sse2 | _mm_storeu_pd | function | ||
| 2874 | core::core_arch::x86::sse2 | _mm_storeu_si128 | function | ||
| 2875 | core::core_arch::x86::sse2 | _mm_storeu_si16 | function | ||
| 2876 | core::core_arch::x86::sse2 | _mm_storeu_si32 | function | ||
| 2877 | core::core_arch::x86::sse2 | _mm_storeu_si64 | function | ||
| 2878 | core::core_arch::x86::sse2 | _mm_stream_pd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2879 | core::core_arch::x86::sse2 | _mm_stream_si128 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2880 | core::core_arch::x86::sse2 | _mm_stream_si32 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2881 | core::core_arch::x86::sse3 | _mm_lddqu_si128 | function | ||
| 2882 | core::core_arch::x86::sse3 | _mm_loaddup_pd | function | ||
| 2883 | core::core_arch::x86::sse41 | _mm_stream_load_si128 | function | ||
| 2884 | core::core_arch::x86::sse4a | _mm_stream_sd | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2885 | core::core_arch::x86::sse4a | _mm_stream_ss | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2886 | core::core_arch::x86::xsave | _xgetbv | function | ||
| 2887 | core::core_arch::x86::xsave | _xrstor | function | ||
| 2888 | core::core_arch::x86::xsave | _xrstors | function | ||
| 2889 | core::core_arch::x86::xsave | _xsave | function | ||
| 2890 | core::core_arch::x86::xsave | _xsavec | function | ||
| 2891 | core::core_arch::x86::xsave | _xsaveopt | function | ||
| 2892 | core::core_arch::x86::xsave | _xsaves | function | ||
| 2893 | core::core_arch::x86::xsave | _xsetbv | function | ||
| 2894 | core::core_arch::x86_64::amx | _tile_cmmimfp16ps | function | ||
| 2895 | core::core_arch::x86_64::amx | _tile_cmmrlfp16ps | function | ||
| 2896 | core::core_arch::x86_64::amx | _tile_cvtrowd2ps | function | ||
| 2897 | core::core_arch::x86_64::amx | _tile_cvtrowps2phh | function | ||
| 2898 | core::core_arch::x86_64::amx | _tile_cvtrowps2phl | function | ||
| 2899 | core::core_arch::x86_64::amx | _tile_dpbf16ps | function | ||
| 2900 | core::core_arch::x86_64::amx | _tile_dpbf8ps | function | ||
| 2901 | core::core_arch::x86_64::amx | _tile_dpbhf8ps | function | ||
| 2902 | core::core_arch::x86_64::amx | _tile_dpbssd | function | ||
| 2903 | core::core_arch::x86_64::amx | _tile_dpbsud | function | ||
| 2904 | core::core_arch::x86_64::amx | _tile_dpbusd | function | ||
| 2905 | core::core_arch::x86_64::amx | _tile_dpbuud | function | ||
| 2906 | core::core_arch::x86_64::amx | _tile_dpfp16ps | function | ||
| 2907 | core::core_arch::x86_64::amx | _tile_dphbf8ps | function | ||
| 2908 | core::core_arch::x86_64::amx | _tile_dphf8ps | function | ||
| 2909 | core::core_arch::x86_64::amx | _tile_loadconfig | function | ||
| 2910 | core::core_arch::x86_64::amx | _tile_loadd | function | ||
| 2911 | core::core_arch::x86_64::amx | _tile_loaddrs | function | ||
| 2912 | core::core_arch::x86_64::amx | _tile_mmultf32ps | function | ||
| 2913 | core::core_arch::x86_64::amx | _tile_movrow | function | ||
| 2914 | core::core_arch::x86_64::amx | _tile_release | function | ||
| 2915 | core::core_arch::x86_64::amx | _tile_storeconfig | function | ||
| 2916 | core::core_arch::x86_64::amx | _tile_stored | function | ||
| 2917 | core::core_arch::x86_64::amx | _tile_stream_loadd | function | ||
| 2918 | core::core_arch::x86_64::amx | _tile_stream_loaddrs | function | ||
| 2919 | core::core_arch::x86_64::amx | _tile_zero | function | ||
| 2920 | core::core_arch::x86_64::bt | _bittest64 | function | ||
| 2921 | core::core_arch::x86_64::bt | _bittestandcomplement64 | function | ||
| 2922 | core::core_arch::x86_64::bt | _bittestandreset64 | function | ||
| 2923 | core::core_arch::x86_64::bt | _bittestandset64 | function | ||
| 2924 | core::core_arch::x86_64::cmpxchg16b | cmpxchg16b | function | ||
| 2925 | core::core_arch::x86_64::fxsr | _fxrstor64 | function | ||
| 2926 | core::core_arch::x86_64::fxsr | _fxsave64 | function | ||
| 2927 | core::core_arch::x86_64::sse2 | _mm_stream_si64 | function | After using this intrinsic, but before any other access to the memory that this intrinsic mutates, a call to [`_mm_sfence`] must be performed by the thread that used the intrinsic. In particular, functions that call this intrinsic should generally call `_mm_sfence` before they return. See [`_mm_sfence`] for details. | |
| 2928 | core::core_arch::x86_64::xsave | _xrstor64 | function | ||
| 2929 | core::core_arch::x86_64::xsave | _xrstors64 | function | ||
| 2930 | core::core_arch::x86_64::xsave | _xsave64 | function | ||
| 2931 | core::core_arch::x86_64::xsave | _xsavec64 | function | ||
| 2932 | core::core_arch::x86_64::xsave | _xsaveopt64 | function | ||
| 2933 | core::core_arch::x86_64::xsave | _xsaves64 | function | ||
| 2934 | core::core_simd::cast::sealed | Sealed | trait | Implementing this trait asserts that the type is a valid vector element for the `simd_cast` or `simd_as` intrinsics. | |
| 2935 | core::core_simd::masks | MaskElement | trait | Type must be a signed integer. | |
| 2936 | core::core_simd::masks::Mask | from_simd_unchecked | function | All elements must be either 0 or -1. | |
| 2937 | core::core_simd::masks::Mask | set_unchecked | function | `index` must be less than `self.len()`. | |
| 2938 | core::core_simd::masks::Mask | test_unchecked | function | `index` must be less than `self.len()`. | |
| 2939 | core::core_simd::vector | SimdElement | trait | This trait, when implemented, asserts the compiler can monomorphize `#[repr(simd)]` structs with the marked type as an element. Strictly, it is valid to impl if the vector will not be miscompiled. Practically, it is user-unfriendly to impl it if the vector won't compile, even when no soundness guarantees are broken by allowing the user to try. | |
| 2940 | core::core_simd::vector::Simd | gather_ptr | function | Each read must satisfy the same conditions as [`core::ptr::read`]. | |
| 2941 | core::core_simd::vector::Simd | gather_select_ptr | function | Enabled elements must satisfy the same conditions as [`core::ptr::read`]. | |
| 2942 | core::core_simd::vector::Simd | gather_select_unchecked | function | Calling this function with an `enable`d out-of-bounds index is *[undefined behavior]* even if the resulting value is not used. | |
| 2943 | core::core_simd::vector::Simd | load_select_ptr | function | Enabled `ptr` elements must be safe to read as if by `core::ptr::read`. | |
| 2944 | core::core_simd::vector::Simd | load_select_unchecked | function | Enabled loads must not exceed the length of `slice`. | |
| 2945 | core::core_simd::vector::Simd | scatter_ptr | function | Each write must satisfy the same conditions as [`core::ptr::write`]. | |
| 2946 | core::core_simd::vector::Simd | scatter_select_ptr | function | Enabled pointers must satisfy the same conditions as [`core::ptr::write`]. | |
| 2947 | core::core_simd::vector::Simd | scatter_select_unchecked | function | Calling this function with an enabled out-of-bounds index is *[undefined behavior]*, and may lead to memory corruption. | |
| 2948 | core::core_simd::vector::Simd | store_select_ptr | function | Memory addresses for each element are calculated using [`pointer::wrapping_offset`], and each enabled element must satisfy the same conditions as [`core::ptr::write`]. | |
| 2949 | core::core_simd::vector::Simd | store_select_unchecked | function | Every enabled element must be in bounds for the `slice`. | |
| 2950 | core::f128 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 2951 | core::f16 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 2952 | core::f32 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 2953 | core::f64 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 2954 | core::ffi::c_str::CStr | from_bytes_with_nul_unchecked | function | The provided slice **must** be nul-terminated and not contain any interior nul bytes. | |
| 2955 | core::ffi::c_str::CStr | from_ptr | function | * The memory pointed to by `ptr` must contain a valid nul terminator at the end of the string. * `ptr` must be [valid] for reads of bytes up to and including the nul terminator. This means in particular: * The entire memory range of this `CStr` must be contained within a single allocation! * `ptr` must be non-null even for a zero-length cstr. * The memory referenced by the returned `CStr` must not be mutated for the duration of lifetime `'a`. * The nul terminator must be within `isize::MAX` from `ptr`. > **Note**: This operation is intended to be a 0-cost cast but it is currently implemented with an up-front calculation of the length of the string. This is not guaranteed to always be the case. | |
| 2956 | core::ffi::va_list | VaArgSafe | trait | The standard library implements this trait for primitive types that are expected to have a variable argument application-binary interface (ABI) on all platforms. When C passes variable arguments, integers smaller than [`c_int`] and floats smaller than [`c_double`] are implicitly promoted to [`c_int`] and [`c_double`] respectively. Implementing this trait for types that are subject to this promotion rule is invalid. [`c_int`]: core::ffi::c_int [`c_double`]: core::ffi::c_double | |
| 2957 | core::ffi::va_list::VaList | arg | function | This function is only sound to call when there is another argument to read, and that argument is a properly initialized value of the type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments, is unsound. | |
| 2958 | core::field | Field | trait | Given a valid value of type `Self::Base`, there exists a valid value of type `Self::Type` at byte offset `OFFSET`. | |
| 2959 | core::future::async_drop | async_drop_in_place | function | ||
| 2960 | core::hint | assert_unchecked | function | `cond` must be `true`. It is immediate UB to call this with `false`. | |
| 2961 | core::hint | unreachable_unchecked | function | Reaching this function is *Undefined Behavior*. As the compiler assumes that all forms of Undefined Behavior can never happen, it will eliminate all branches in the surrounding code that it can determine will invariably lead to a call to `unreachable_unchecked()`. If the assumptions embedded in using this function turn out to be wrong - that is, if the site which is calling `unreachable_unchecked()` is actually reachable at runtime - the compiler may have generated nonsensical machine instructions for this situation, including in seemingly unrelated code, causing difficult-to-debug problems. Use this function sparingly. Consider using the [`unreachable!`] macro, which may prevent some optimizations but will safely panic in case it is actually reached at runtime. Benchmark your code to find out if using `unreachable_unchecked()` comes with a performance benefit. | |
| 2962 | core::i128 | unchecked_add | function | This results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add | |
| 2963 | core::i128 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 2964 | core::i128 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul | |
| 2965 | core::i128 | unchecked_neg | function | This results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg | |
| 2966 | core::i128 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl | |
| 2967 | core::i128 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()`, i.e. when [`i128::shl_exact`] would return `None`. | |
| 2968 | core::i128 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr | |
| 2969 | core::i128 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS`, i.e. when [`i128::shr_exact`] would return `None`. | |
| 2970 | core::i128 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub | |
| 2971 | core::i16 | unchecked_add | function | This results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add | |
| 2972 | core::i16 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 2973 | core::i16 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul | |
| 2974 | core::i16 | unchecked_neg | function | This results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg | |
| 2975 | core::i16 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl | |
| 2976 | core::i16 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i16::shl_exact`] would return `None`. | |
| 2977 | core::i16 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr | |
| 2978 | core::i16 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS` i.e. when [`i16::shr_exact`] would return `None`. | |
| 2979 | core::i16 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub | |
| 2980 | core::i32 | unchecked_add | function | This results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add | |
| 2981 | core::i32 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 2982 | core::i32 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul | |
| 2983 | core::i32 | unchecked_neg | function | This results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg | |
| 2984 | core::i32 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl | |
| 2985 | core::i32 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i32::shl_exact`] would return `None`. | |
| 2986 | core::i32 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr | |
| 2987 | core::i32 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS` i.e. when [`i32::shr_exact`] would return `None`. | |
| 2988 | core::i32 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub | |
| 2989 | core::i64 | unchecked_add | function | This results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add | |
| 2990 | core::i64 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 2991 | core::i64 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul | |
| 2992 | core::i64 | unchecked_neg | function | This results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg | |
| 2993 | core::i64 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl | |
| 2994 | core::i64 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i64::shl_exact`] would return `None`. | |
| 2995 | core::i64 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr | |
| 2996 | core::i64 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS` i.e. when [`i64::shr_exact`] would return `None`. | |
| 2997 | core::i64 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub | |
| 2998 | core::i8 | unchecked_add | function | This results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add | |
| 2999 | core::i8 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3000 | core::i8 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul | |
| 3001 | core::i8 | unchecked_neg | function | This results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg | |
| 3002 | core::i8 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl | |
| 3003 | core::i8 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i8::shl_exact`] would return `None`. | |
| 3004 | core::i8 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr | |
| 3005 | core::i8 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS` i.e. when [`i8::shr_exact`] would return `None`. | |
| 3006 | core::i8 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub | |
| 3007 | core::intrinsics | align_of_val | function | See [`crate::mem::align_of_val_raw`] for safety conditions. | |
| 3008 | core::intrinsics | arith_offset | function | Unlike the `offset` intrinsic, this intrinsic does not restrict the resulting pointer to point into or at the end of an allocated object, and it wraps with two's complement arithmetic. The resulting value is not necessarily valid to be used to actually access memory. The stabilized version of this intrinsic is [`pointer::wrapping_offset`]. | |
| 3009 | core::intrinsics | assume | function | ||
| 3010 | core::intrinsics | atomic_and | function | ||
| 3011 | core::intrinsics | atomic_cxchg | function | ||
| 3012 | core::intrinsics | atomic_cxchgweak | function | ||
| 3013 | core::intrinsics | atomic_fence | function | ||
| 3014 | core::intrinsics | atomic_load | function | ||
| 3015 | core::intrinsics | atomic_max | function | ||
| 3016 | core::intrinsics | atomic_min | function | ||
| 3017 | core::intrinsics | atomic_nand | function | ||
| 3018 | core::intrinsics | atomic_or | function | ||
| 3019 | core::intrinsics | atomic_singlethreadfence | function | ||
| 3020 | core::intrinsics | atomic_store | function | ||
| 3021 | core::intrinsics | atomic_umax | function | ||
| 3022 | core::intrinsics | atomic_umin | function | ||
| 3023 | core::intrinsics | atomic_xadd | function | ||
| 3024 | core::intrinsics | atomic_xchg | function | ||
| 3025 | core::intrinsics | atomic_xor | function | ||
| 3026 | core::intrinsics | atomic_xsub | function | ||
| 3027 | core::intrinsics | catch_unwind | function | ||
| 3028 | core::intrinsics | compare_bytes | function | `left` and `right` must each be [valid] for reads of `bytes` bytes. Note that this applies to the whole range, not just until the first byte that differs. That allows optimizations that can read in large chunks. [valid]: crate::ptr#safety | |
| 3029 | core::intrinsics | const_allocate | function | - The `align` argument must be a power of two. - At compile time, a compile error occurs if this constraint is violated. - At runtime, it is not checked. | |
| 3030 | core::intrinsics | const_deallocate | function | - The `align` argument must be a power of two. - At compile time, a compile error occurs if this constraint is violated. - At runtime, it is not checked. - If the `ptr` was created in another const, this intrinsic doesn't deallocate it. - If the `ptr` is pointing to a local variable, this intrinsic doesn't deallocate it. | |
| 3031 | core::intrinsics | const_make_global | function | ||
| 3032 | core::intrinsics | copy | function | ||
| 3033 | core::intrinsics | copy_nonoverlapping | function | ||
| 3034 | core::intrinsics | ctlz_nonzero | function | ||
| 3035 | core::intrinsics | cttz_nonzero | function | ||
| 3036 | core::intrinsics | disjoint_bitor | function | Requires that `(a & b) == 0`, or equivalently that `(a | b) == (a + b)`. Otherwise it's immediate UB. | |
| 3037 | core::intrinsics | exact_div | function | ||
| 3038 | core::intrinsics | fadd_fast | function | ||
| 3039 | core::intrinsics | fdiv_fast | function | ||
| 3040 | core::intrinsics | float_to_int_unchecked | function | ||
| 3041 | core::intrinsics | fmul_fast | function | ||
| 3042 | core::intrinsics | frem_fast | function | ||
| 3043 | core::intrinsics | fsub_fast | function | ||
| 3044 | core::intrinsics | nontemporal_store | function | ||
| 3045 | core::intrinsics | offset | function | If the computed offset is non-zero, then both the starting and resulting pointer must be either in bounds or at the end of an allocation. If either pointer is out of bounds or arithmetic overflow occurs then this operation is undefined behavior. The stabilized version of this intrinsic is [`pointer::offset`]. | |
| 3046 | core::intrinsics | ptr_offset_from | function | ||
| 3047 | core::intrinsics | ptr_offset_from_unsigned | function | ||
| 3048 | core::intrinsics | raw_eq | function | It's UB to call this if any of the *bytes* in `*a` or `*b` are uninitialized. Note that this is a stricter criterion than just the *values* being fully-initialized: if `T` has padding, it's UB to call this intrinsic. At compile-time, it is furthermore UB to call this if any of the bytes in `*a` or `*b` have provenance. (The implementation is allowed to branch on the results of comparisons, which is UB if any of their inputs are `undef`.) | |
| 3049 | core::intrinsics | read_via_copy | function | ||
| 3050 | core::intrinsics | size_of_val | function | See [`crate::mem::size_of_val_raw`] for safety conditions. | |
| 3051 | core::intrinsics | slice_get_unchecked | function | - `index < PtrMetadata(slice_ptr)`, so the indexing is in-bounds for the slice - the resulting offsetting is in-bounds of the allocation, which is always the case for references, but needs to be upheld manually for pointers | |
| 3052 | core::intrinsics | transmute | function | ||
| 3053 | core::intrinsics | transmute_unchecked | function | ||
| 3054 | core::intrinsics | typed_swap_nonoverlapping | function | Behavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` must *not* overlap with the region of memory beginning at `y`. * The memory pointed to by `x` and `y` must each contain a valid value of type `T`. [valid]: crate::ptr#safety | |
| 3055 | core::intrinsics | unaligned_volatile_load | function | ||
| 3056 | core::intrinsics | unaligned_volatile_store | function | ||
| 3057 | core::intrinsics | unchecked_add | function | ||
| 3058 | core::intrinsics | unchecked_div | function | ||
| 3059 | core::intrinsics | unchecked_funnel_shl | function | ||
| 3060 | core::intrinsics | unchecked_funnel_shr | function | ||
| 3061 | core::intrinsics | unchecked_mul | function | ||
| 3062 | core::intrinsics | unchecked_rem | function | ||
| 3063 | core::intrinsics | unchecked_shl | function | ||
| 3064 | core::intrinsics | unchecked_shr | function | ||
| 3065 | core::intrinsics | unchecked_sub | function | ||
| 3066 | core::intrinsics | unreachable | function | ||
| 3067 | core::intrinsics | va_arg | function | This function is only sound to call when: - there is a next variable argument available, - the next argument's type is ABI-compatible with the type `T`, and - the next argument holds a properly initialized value of type `T`. Calling this function with an incompatible type, an invalid value, or when there are no more variable arguments is unsound. | |
| 3068 | core::intrinsics | va_end | function | `ap` must not be used to access variable arguments after this call. | |
| 3069 | core::intrinsics | volatile_copy_memory | function | ||
| 3070 | core::intrinsics | volatile_copy_nonoverlapping_memory | function | The safety requirements are consistent with [`copy_nonoverlapping`] while the read and write behaviors are volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`copy_nonoverlapping`]: ptr::copy_nonoverlapping | |
| 3071 | core::intrinsics | volatile_load | function | ||
| 3072 | core::intrinsics | volatile_set_memory | function | The safety requirements are consistent with [`write_bytes`] while the write behavior is volatile, which means it will not be optimized out unless `_count` or `size_of::<T>()` is equal to zero. [`write_bytes`]: ptr::write_bytes | |
| 3073 | core::intrinsics | volatile_store | function | ||
| 3074 | core::intrinsics | vtable_align | function | `ptr` must point to a vtable. | |
| 3075 | core::intrinsics | vtable_size | function | `ptr` must point to a vtable. | |
| 3076 | core::intrinsics | write_bytes | function | ||
| 3077 | core::intrinsics | write_via_move | function | ||
| 3078 | core::intrinsics::bounds | BuiltinDeref | trait | Must actually *be* such a type. | |
| 3079 | core::intrinsics::simd | simd_add | function | ||
| 3080 | core::intrinsics::simd | simd_and | function | ||
| 3081 | core::intrinsics::simd | simd_arith_offset | function | ||
| 3082 | core::intrinsics::simd | simd_as | function | ||
| 3083 | core::intrinsics::simd | simd_bitmask | function | `x` must contain only `0` and `!0`. | |
| 3084 | core::intrinsics::simd | simd_bitreverse | function | ||
| 3085 | core::intrinsics::simd | simd_bswap | function | ||
| 3086 | core::intrinsics::simd | simd_carryless_mul | function | ||
| 3087 | core::intrinsics::simd | simd_cast | function | Casting from integer types is always safe. Casting between two float types is also always safe. Casting floats to integers truncates, following the same rules as `to_int_unchecked`. Specifically, each element must: * Not be `NaN` * Not be infinite * Be representable in the return type, after truncating off its fractional part | |
| 3088 | core::intrinsics::simd | simd_cast_ptr | function | ||
| 3089 | core::intrinsics::simd | simd_ceil | function | ||
| 3090 | core::intrinsics::simd | simd_ctlz | function | ||
| 3091 | core::intrinsics::simd | simd_ctpop | function | ||
| 3092 | core::intrinsics::simd | simd_cttz | function | ||
| 3093 | core::intrinsics::simd | simd_div | function | For integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN / -1` is undefined behavior. | |
| 3094 | core::intrinsics::simd | simd_eq | function | ||
| 3095 | core::intrinsics::simd | simd_expose_provenance | function | ||
| 3096 | core::intrinsics::simd | simd_extract | function | `idx` must be const and in-bounds of the vector. | |
| 3097 | core::intrinsics::simd | simd_extract_dyn | function | `idx` must be in-bounds of the vector. | |
| 3098 | core::intrinsics::simd | simd_fabs | function | ||
| 3099 | core::intrinsics::simd | simd_fcos | function | ||
| 3100 | core::intrinsics::simd | simd_fexp | function | ||
| 3101 | core::intrinsics::simd | simd_fexp2 | function | ||
| 3102 | core::intrinsics::simd | simd_flog | function | ||
| 3103 | core::intrinsics::simd | simd_flog10 | function | ||
| 3104 | core::intrinsics::simd | simd_flog2 | function | ||
| 3105 | core::intrinsics::simd | simd_floor | function | ||
| 3106 | core::intrinsics::simd | simd_fma | function | ||
| 3107 | core::intrinsics::simd | simd_fmax | function | ||
| 3108 | core::intrinsics::simd | simd_fmin | function | ||
| 3109 | core::intrinsics::simd | simd_fsin | function | ||
| 3110 | core::intrinsics::simd | simd_fsqrt | function | ||
| 3111 | core::intrinsics::simd | simd_funnel_shl | function | Each element of `shift` must be less than `<int>::BITS`. | |
| 3112 | core::intrinsics::simd | simd_funnel_shr | function | Each element of `shift` must be less than `<int>::BITS`. | |
| 3113 | core::intrinsics::simd | simd_gather | function | Unmasked values in `T` must be readable as if by `<ptr>::read` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values. | |
| 3114 | core::intrinsics::simd | simd_ge | function | ||
| 3115 | core::intrinsics::simd | simd_gt | function | ||
| 3116 | core::intrinsics::simd | simd_insert | function | `idx` must be in-bounds of the vector. | |
| 3117 | core::intrinsics::simd | simd_insert_dyn | function | `idx` must be in-bounds of the vector. | |
| 3118 | core::intrinsics::simd | simd_le | function | ||
| 3119 | core::intrinsics::simd | simd_lt | function | ||
| 3120 | core::intrinsics::simd | simd_masked_load | function | `ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values. | |
| 3121 | core::intrinsics::simd | simd_masked_store | function | `ptr` must be aligned according to the `ALIGN` parameter, see [`SimdAlign`] for details. `mask` must only contain `0` or `!0` values. | |
| 3122 | core::intrinsics::simd | simd_mul | function | ||
| 3123 | core::intrinsics::simd | simd_ne | function | ||
| 3124 | core::intrinsics::simd | simd_neg | function | ||
| 3125 | core::intrinsics::simd | simd_or | function | ||
| 3126 | core::intrinsics::simd | simd_reduce_add_ordered | function | ||
| 3127 | core::intrinsics::simd | simd_reduce_add_unordered | function | ||
| 3128 | core::intrinsics::simd | simd_reduce_all | function | `x` must contain only `0` or `!0`. | |
| 3129 | core::intrinsics::simd | simd_reduce_and | function | ||
| 3130 | core::intrinsics::simd | simd_reduce_any | function | `x` must contain only `0` or `!0`. | |
| 3131 | core::intrinsics::simd | simd_reduce_max | function | ||
| 3132 | core::intrinsics::simd | simd_reduce_min | function | ||
| 3133 | core::intrinsics::simd | simd_reduce_mul_ordered | function | ||
| 3134 | core::intrinsics::simd | simd_reduce_mul_unordered | function | ||
| 3135 | core::intrinsics::simd | simd_reduce_or | function | ||
| 3136 | core::intrinsics::simd | simd_reduce_xor | function | ||
| 3137 | core::intrinsics::simd | simd_relaxed_fma | function | ||
| 3138 | core::intrinsics::simd | simd_rem | function | For integers, `rhs` must not contain any zero elements. Additionally for signed integers, `<int>::MIN % -1` is undefined behavior. | |
| 3139 | core::intrinsics::simd | simd_round | function | ||
| 3140 | core::intrinsics::simd | simd_round_ties_even | function | ||
| 3141 | core::intrinsics::simd | simd_saturating_add | function | ||
| 3142 | core::intrinsics::simd | simd_saturating_sub | function | ||
| 3143 | core::intrinsics::simd | simd_scatter | function | Unmasked values in `T` must be writable as if by `<ptr>::write` (e.g. aligned to the element type). `mask` must only contain `0` or `!0` values. | |
| 3144 | core::intrinsics::simd | simd_select | function | `mask` must only contain `0` and `!0`. | |
| 3145 | core::intrinsics::simd | simd_select_bitmask | function | ||
| 3146 | core::intrinsics::simd | simd_shl | function | Each element of `rhs` must be less than `<int>::BITS`. | |
| 3147 | core::intrinsics::simd | simd_shr | function | Each element of `rhs` must be less than `<int>::BITS`. | |
| 3148 | core::intrinsics::simd | simd_shuffle | function | ||
| 3149 | core::intrinsics::simd | simd_splat | function | ||
| 3150 | core::intrinsics::simd | simd_sub | function | ||
| 3151 | core::intrinsics::simd | simd_trunc | function | ||
| 3152 | core::intrinsics::simd | simd_with_exposed_provenance | function | ||
| 3153 | core::intrinsics::simd | simd_xor | function | ||
| 3154 | core::io::borrowed_buf::BorrowedBuf | set_init | function | The caller must ensure that the first `n` unfilled bytes of the buffer have already been initialized. | |
| 3155 | core::io::borrowed_buf::BorrowedCursor | advance_unchecked | function | The caller must ensure that the first `n` bytes of the cursor have been properly initialized. | |
| 3156 | core::io::borrowed_buf::BorrowedCursor | as_mut | function | The caller must not uninitialize any bytes in the initialized portion of the cursor. | |
| 3157 | core::io::borrowed_buf::BorrowedCursor | set_init | function | The caller must ensure that the first `n` bytes of the buffer have already been initialized. | |
| 3158 | core::isize | unchecked_add | function | This results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add | |
| 3159 | core::isize | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3160 | core::isize | unchecked_mul | function | This results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul | |
| 3161 | core::isize | unchecked_neg | function | This results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg | |
| 3162 | core::isize | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl | |
| 3163 | core::isize | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`isize::shl_exact`] would return `None`. | |
| 3164 | core::isize | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr | |
| 3165 | core::isize | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS` i.e. when [`isize::shr_exact`] would return `None`. | |
| 3166 | core::isize | unchecked_sub | function | This results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub | |
| 3167 | core::iter::traits::marker | TrustedLen | trait | This trait must only be implemented when the contract is upheld. Consumers of this trait must inspect [`Iterator::size_hint()`]’s upper bound. | |
| 3168 | core::iter::traits::marker | TrustedStep | trait | The implementation of [`Step`] for the given type must guarantee all invariants of all methods are upheld. See the [`Step`] trait's documentation for details. Consumers are free to rely on the invariants in unsafe code. | |
| 3169 | core::marker | Freeze | trait | This trait is a core part of the language; it is merely expressed as a trait in libcore for convenience. Do *not* implement it for other types. | |
| 3170 | core::marker | Send | trait | ||
| 3171 | core::marker | Sync | trait | ||
| 3172 | core::marker | UnsafeUnpin | trait | ||
| 3173 | core::mem | align_of_val_raw | function | This function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`align_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html | |
| 3174 | core::mem | conjure_zst | function | - `T` must be *[inhabited]*, i.e. possible to construct. This means that types like zero-variant enums and [`!`] are unsound to conjure. - You must use the value only in ways which do not violate any *safety* invariants of the type. While it's easy to create a *valid* instance of an inhabited ZST, since having no bits in its representation means there's only one possible value, that doesn't mean that it's always *sound* to do so. For example, a library could design zero-sized tokens that are `!Default + !Clone`, limiting their creation to functions that initialize some state or establish a scope. Conjuring such a token could break invariants and lead to unsoundness. | |
| 3175 | core::mem | size_of_val_raw | function | This function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`size_of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [`size_of::<T>()`]: size_of [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html | |
| 3176 | core::mem | transmute_copy | function | ||
| 3177 | core::mem | uninitialized | function | ||
| 3178 | core::mem | zeroed | function | ||
| 3179 | core::mem::manually_drop::ManuallyDrop | drop | function | This function runs the destructor of the contained value. Other than changes made by the destructor itself, the memory is left unchanged, and so as far as the compiler is concerned still holds a bit-pattern which is valid for the type `T`. However, this "zombie" value should not be exposed to safe code, and this function should not be called more than once. Using a value after it has been dropped, or dropping a value multiple times, can cause undefined behavior (depending on what `drop` does). This is normally prevented by the type system, but users of `ManuallyDrop` must uphold those guarantees without assistance from the compiler. [pinned]: crate::pin | |
| 3180 | core::mem::manually_drop::ManuallyDrop | take | function | This function semantically moves out the contained value without preventing further usage, leaving the state of this container unchanged. It is your responsibility to ensure that this `ManuallyDrop` is not used again. | |
| 3181 | core::mem::maybe_uninit::MaybeUninit | array_assume_init | function | It is up to the caller to guarantee that all elements of the array are in an initialized state. | |
| 3182 | core::mem::maybe_uninit::MaybeUninit | assume_init | function | It is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes immediate undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. [inv]: #initialization-invariant On top of that, remember that most types have additional invariants beyond merely being considered initialized at the type level. For example, a `1`-initialized [`Vec<T>`] is considered initialized (under the current implementation; this does not constitute a stable guarantee) because the only requirement the compiler knows about it is that the data pointer must be non-null. Creating such a `Vec<T>` does not cause *immediate* undefined behavior, but will cause undefined behavior with most safe operations (including dropping it). [`Vec<T>`]: ../../std/vec/struct.Vec.html | |
| 3183 | core::mem::maybe_uninit::MaybeUninit | assume_init_drop | function | It is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about it is that the data pointer must be non-null. Dropping such a `Vec<T>` however will cause undefined behavior. [`assume_init`]: MaybeUninit::assume_init | |
| 3184 | core::mem::maybe_uninit::MaybeUninit | assume_init_mut | function | Calling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit`. | |
| 3185 | core::mem::maybe_uninit::MaybeUninit | assume_init_read | function | It is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. The [type-level documentation][inv] contains more information about this initialization invariant. Moreover, similar to the [`ptr::read`] function, this function creates a bitwise copy of the contents, regardless of whether the contained type implements the [`Copy`] trait or not. When using multiple copies of the data (by calling `assume_init_read` multiple times, or first calling `assume_init_read` and then [`assume_init`]), it is your responsibility to ensure that data may indeed be duplicated. [inv]: #initialization-invariant [`assume_init`]: MaybeUninit::assume_init | |
| 3186 | core::mem::maybe_uninit::MaybeUninit | assume_init_ref | function | Calling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that the `MaybeUninit<T>` really is in an initialized state. | |
| 3187 | core::mem::transmutability | TransmuteFrom | trait | If `Dst: TransmuteFrom<Src, ASSUMPTIONS>`, the compiler guarantees that `Src` is soundly *union-transmutable* into a value of type `Dst`, provided that the programmer has guaranteed that the given [`ASSUMPTIONS`](Assume) are satisfied. A union-transmute is any bit-reinterpretation conversion in the form of: ```rust pub unsafe fn transmute_via_union<Src, Dst>(src: Src) -> Dst { use core::mem::ManuallyDrop; #[repr(C)] union Transmute<Src, Dst> { src: ManuallyDrop<Src>, dst: ManuallyDrop<Dst>, } let transmute = Transmute { src: ManuallyDrop::new(src), }; let dst = unsafe { transmute.dst }; ManuallyDrop::into_inner(dst) } ``` Note that this construction is more permissive than [`mem::transmute_copy`](super::transmute_copy); union-transmutes permit conversions that extend the bits of `Src` with trailing padding to fill trailing uninitialized bytes of `Self`; e.g.: ```rust #![feature(transmutability)] use core::mem::{Assume, TransmuteFrom}; let src = 42u8; // size = 1 #[repr(C, align(2))] struct Dst(u8); // size = 2 let _ = unsafe { <Dst as TransmuteFrom<u8, { Assume::SAFETY }>>::transmute(src) }; ``` | |
| 3188 | core::num::nonzero | ZeroablePrimitive | trait | Types implementing this trait must be primitives that are valid when zeroed. The associated `Self::NonZeroInner` type must have the same size+align as `Self`, but with a niche and bit validity making it so the following `transmutes` are sound: - `Self::NonZeroInner` to `Option<Self::NonZeroInner>` - `Option<Self::NonZeroInner>` to `Self` (And, consequently, `Self::NonZeroInner` to `Self`.) | |
| 3189 | core::num::nonzero::NonZero | from_mut_unchecked | function | The referenced value must not be zero. | |
| 3190 | core::num::nonzero::NonZero | new_unchecked | function | The value must not be zero. | |
| 3191 | core::num::nonzero::NonZero | unchecked_add | function | ||
| 3192 | core::num::nonzero::NonZero | unchecked_mul | function | ||
| 3193 | core::ops::deref | DerefPure | trait | ||
| 3194 | core::option::Option | unwrap_unchecked | function | Calling this method on [`None`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3195 | core::pin | PinCoerceUnsized | trait | If this type implements `Deref`, then the concrete type returned by `deref` and `deref_mut` must not change without a modification. The following operations are not considered modifications: * Moving the pointer. * Performing unsizing coercions on the pointer. * Performing dynamic dispatch with the pointer. * Calling `deref` or `deref_mut` on the pointer. The concrete type of a trait object is the type that the vtable corresponds to. The concrete type of a slice is an array of the same element type and the length specified in the metadata. The concrete type of a sized type is the type itself. | |
| 3196 | core::pin::Pin | get_unchecked_mut | function | This function is unsafe. You must guarantee that you will never move the data out of the mutable reference you receive when you call this function, so that the invariants on the `Pin` type can be upheld. If the underlying data is `Unpin`, `Pin::get_mut` should be used instead. | |
| 3197 | core::pin::Pin | into_inner_unchecked | function | This function is unsafe. You must guarantee that you will continue to treat the pointer `Ptr` as pinned after you call this function, so that the invariants on the `Pin` type can be upheld. If the code using the resulting `Ptr` does not continue to maintain the pinning invariants that is a violation of the API contract and may lead to undefined behavior in later (safe) operations. Note that you must be able to guarantee that the data pointed to by `Ptr` will be treated as pinned all the way until its `drop` handler is complete! *For more information, see the [`pin` module docs][self]* If the underlying data is [`Unpin`], [`Pin::into_inner`] should be used instead. | |
| 3198 | core::pin::Pin | map_unchecked | function | This function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning | |
| 3199 | core::pin::Pin | map_unchecked_mut | function | This function is unsafe. You must guarantee that the data you return will not move so long as the argument value does not move (for example, because it is one of the fields of that value), and also that you do not move out of the argument you receive to the interior function. [`pin` module]: self#projections-and-structural-pinning | |
| 3200 | core::pin::Pin | new_unchecked | function | This constructor is unsafe because we cannot guarantee that the data pointed to by `pointer` is pinned. At its core, pinning a value means making the guarantee that the value's data will not be moved nor have its storage invalidated until it gets dropped. For a more thorough explanation of pinning, see the [`pin` module docs]. If the caller that is constructing this `Pin<Ptr>` does not ensure that the data `Ptr` points to is pinned, that is a violation of the API contract and may lead to undefined behavior in later (even safe) operations. By using this method, you are also making a promise about the [`Deref`], [`DerefMut`], and [`Drop`] implementations of `Ptr`, if they exist. Most importantly, they must not move out of their `self` arguments: `Pin::as_mut` and `Pin::as_ref` will call `DerefMut::deref_mut` and `Deref::deref` *on the pointer type `Ptr`* and expect these methods to uphold the pinning invariants. Moreover, by calling this method you promise that the reference `Ptr` dereferences to will not be moved out of again; in particular, it must not be possible to obtain a `&mut Ptr::Target` and then move out of that reference (using, for example [`mem::swap`]). For example, calling `Pin::new_unchecked` on an `&'a mut T` is unsafe because while you are able to pin it for the given lifetime `'a`, you have no control over whether it is kept pinned once `'a` ends, and therefore cannot uphold the guarantee that a value, once pinned, remains pinned until it is dropped: ``` use std::mem; use std::pin::Pin; fn move_pinned_ref<T>(mut a: T, mut b: T) { unsafe { let p: Pin<&mut T> = Pin::new_unchecked(&mut a); // This should mean the pointee `a` can never move again. } mem::swap(&mut a, &mut b); // Potential UB down the road ⚠️ // The address of `a` changed to `b`'s stack slot, so `a` got moved even // though we have previously pinned it! We have violated the pinning API contract. } ``` A value, once pinned, must remain pinned until it is dropped (unless its type implements `Unpin`). Because `Pin<&mut T>` does not own the value, dropping the `Pin` will not drop the value and will not end the pinning contract. So moving the value after dropping the `Pin<&mut T>` is still a violation of the API contract. Similarly, calling `Pin::new_unchecked` on an `Rc<T>` is unsafe because there could be aliases to the same data that are not subject to the pinning restrictions: ``` use std::rc::Rc; use std::pin::Pin; fn move_pinned_rc<T>(mut x: Rc<T>) { // This should mean the pointee can never move again. let pin = unsafe { Pin::new_unchecked(Rc::clone(&x)) }; { let p: Pin<&T> = pin.as_ref(); // ... } drop(pin); let content = Rc::get_mut(&mut x).unwrap(); // Potential UB down the road ⚠️ // Now, if `x` was the only reference, we have a mutable reference to // data that we pinned above, which we could use to move it as we have // seen in the previous example. We have violated the pinning API contract. } ``` | |
| 3201 | core::pointer | add | function | If any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_add`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_add`]: #method.wrapping_add [allocation]: crate::ptr#allocation | |
| 3202 | core::pointer | as_mut | function | When calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3203 | core::pointer | as_mut_unchecked | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3204 | core::pointer | as_ref | function | When calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3205 | core::pointer | as_ref_unchecked | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3206 | core::pointer | as_uninit_mut | function | When calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3207 | core::pointer | as_uninit_ref | function | When calling this method, you have to ensure that *either* the pointer is null *or* the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory. | |
| 3208 | core::pointer | as_uninit_slice | function | When calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation | |
| 3209 | core::pointer | as_uninit_slice_mut | function | When calling this method, you have to ensure that *either* the pointer is null *or* all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single [allocation]! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`][]. [valid]: crate::ptr#safety [allocation]: crate::ptr#allocation | |
| 3210 | core::pointer | byte_add | function | ||
| 3211 | core::pointer | byte_offset | function | ||
| 3212 | core::pointer | byte_offset_from | function | ||
| 3213 | core::pointer | byte_offset_from_unsigned | function | ||
| 3214 | core::pointer | byte_sub | function | ||
| 3215 | core::pointer | copy_from | function | ||
| 3216 | core::pointer | copy_from_nonoverlapping | function | ||
| 3217 | core::pointer | copy_to | function | ||
| 3218 | core::pointer | copy_to_nonoverlapping | function | ||
| 3219 | core::pointer | drop_in_place | function | ||
| 3220 | core::pointer | get_unchecked | function | ||
| 3221 | core::pointer | get_unchecked_mut | function | ||
| 3222 | core::pointer | offset | function | If any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Note that "range" here refers to a half-open range as usual in Rust, i.e., `self..result` for non-negative offsets and `result..self` for negative offsets. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_offset`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_offset`]: #method.wrapping_offset [allocation]: crate::ptr#allocation | |
| 3223 | core::pointer | offset_from | function | If any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be [derived from][crate::ptr#provenance] a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that allocation. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocations is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation | |
| 3224 | core::pointer | offset_from_unsigned | function | - The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`. | |
| 3225 | core::pointer | read | function | ||
| 3226 | core::pointer | read_unaligned | function | ||
| 3227 | core::pointer | read_volatile | function | ||
| 3228 | core::pointer | replace | function | ||
| 3229 | core::pointer | split_at_mut | function | `mid` must be [in-bounds] of the underlying [allocation]. This means that `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. Since `len` being in-bounds is not a safety invariant of `*mut [T]`, the safety requirements of this method are the same as for [`split_at_mut_unchecked`]. The explicit bounds check is only as useful as `len` is correct. [`split_at_mut_unchecked`]: #method.split_at_mut_unchecked [in-bounds]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3230 | core::pointer | split_at_mut_unchecked | function | `mid` must be [in-bounds] of the underlying [allocation]. This means that `self` must be dereferenceable and span a single allocation that is at least `mid * size_of::<T>()` bytes long. Not upholding these requirements is *[undefined behavior]* even if the resulting pointers are not used. [in-bounds]: #method.add [out-of-bounds index]: #method.add [allocation]: crate::ptr#allocation [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3231 | core::pointer | sub | function | If any of the following conditions are violated, the result is Undefined Behavior: * The offset in bytes, `count * size_of::<T>()`, computed on mathematical integers (without "wrapping around"), must fit in an `isize`. * If the computed offset is non-zero, then `self` must be [derived from][crate::ptr#provenance] a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. Consider using [`wrapping_sub`] instead if these constraints are difficult to satisfy. The only advantage of this method is that it enables more aggressive compiler optimizations. [`wrapping_sub`]: #method.wrapping_sub [allocation]: crate::ptr#allocation | |
| 3232 | core::pointer | swap | function | ||
| 3233 | core::pointer | write | function | ||
| 3234 | core::pointer | write_bytes | function | ||
| 3235 | core::pointer | write_unaligned | function | ||
| 3236 | core::pointer | write_volatile | function | ||
| 3237 | core::ptr | copy | function | Behavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0, and `dst` must remain valid even when `src` is read for `count * size_of::<T>()` bytes. (This means if the memory ranges overlap, the `dst` pointer must not be invalidated by `src` reads.) * Both `src` and `dst` must be properly aligned. Like [`read`], `copy` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety | |
| 3238 | core::ptr | copy_nonoverlapping | function | Behavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads of `count * size_of::<T>()` bytes or that number must be 0. * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes or that number must be 0. * Both `src` and `dst` must be properly aligned. * The region of memory beginning at `src` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `dst` with the same size. Like [`read`], `copy_nonoverlapping` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using *both* the values in the region beginning at `*src` and the region beginning at `*dst` can [violate memory safety][read-ownership]. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [`read`]: crate::ptr::read [read-ownership]: crate::ptr::read#ownership-of-the-returned-value [valid]: crate::ptr#safety | |
| 3239 | core::ptr | drop_in_place | function | Behavior is undefined if any of the following conditions are violated: * `to_drop` must be [valid] for both reads and writes. * `to_drop` must be properly aligned, even if `T` has size 0. * `to_drop` must be nonnull, even if `T` has size 0. * The value `to_drop` points to must be valid for dropping, which may mean it must uphold additional invariants. These invariants depend on the type of the value being dropped. For instance, when dropping a Box, the box's pointer to the heap must be valid. * While `drop_in_place` is executing, the only way to access parts of `to_drop` is through the `&mut self` references supplied to the `Drop::drop` methods that `drop_in_place` invokes. Additionally, if `T` is not [`Copy`], using the pointed-to value after calling `drop_in_place` can cause undefined behavior. Note that `*to_drop = foo` counts as a use because it will cause the value to be dropped again. [`write()`] can be used to overwrite data without causing it to be dropped. [valid]: self#safety | |
| 3240 | core::ptr | read | function | Behavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads or `T` must be a ZST. * `src` must be properly aligned. Use [`read_unaligned`] if this is not the case. * `src` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. | |
| 3241 | core::ptr | read_unaligned | function | Behavior is undefined if any of the following conditions are violated: * `src` must be [valid] for reads. * `src` must point to a properly initialized value of type `T`. Like [`read`], `read_unaligned` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. [read-ownership]: read#ownership-of-the-returned-value [valid]: self#safety | |
| 3242 | core::ptr | read_volatile | function | Like [`read`], `read_volatile` creates a bitwise copy of `T`, regardless of whether `T` is [`Copy`]. If `T` is not [`Copy`], using both the returned value and the value at `*src` can [violate memory safety][read-ownership]. However, storing non-[`Copy`] types in volatile memory is almost certainly incorrect. Behavior is undefined if any of the following conditions are violated: * `src` must be either [valid] for reads, or `T` must be a ZST, or `src` must point to memory outside of all Rust allocations and reading from that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `src` must be properly aligned. * Reading from `src` must produce a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety [read-ownership]: read#ownership-of-the-returned-value | |
| 3243 | core::ptr | replace | function | Behavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for both reads and writes or `T` must be a ZST. * `dst` must be properly aligned. * `dst` must point to a properly initialized value of type `T`. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety | |
| 3244 | core::ptr | swap | function | Behavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes. They must remain valid even when the other pointer is written. (This means if the memory ranges overlap, the two pointers must not be subject to aliasing restrictions relative to each other.) * Both `x` and `y` must be properly aligned. Note that even if `T` has size `0`, the pointers must be properly aligned. [valid]: self#safety | |
| 3245 | core::ptr | swap_nonoverlapping | function | Behavior is undefined if any of the following conditions are violated: * Both `x` and `y` must be [valid] for both reads and writes of `count * size_of::<T>()` bytes. * Both `x` and `y` must be properly aligned. * The region of memory beginning at `x` with a size of `count * size_of::<T>()` bytes must *not* overlap with the region of memory beginning at `y` with the same size. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointers must be properly aligned. [valid]: self#safety | |
| 3246 | core::ptr | write | function | Behavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes or `T` must be a ZST. * `dst` must be properly aligned. Use [`write_unaligned`] if this is not the case. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety | |
| 3247 | core::ptr | write_bytes | function | Behavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes of `count * size_of::<T>()` bytes. * `dst` must be properly aligned. Note that even if the effectively copied size (`count * size_of::<T>()`) is `0`, the pointer must be properly aligned. Additionally, note that changing `*dst` in this way can easily lead to undefined behavior (UB) later if the written bytes are not a valid representation of some `T`. For instance, the following is an **incorrect** use of this function: ```rust,no_run unsafe { let mut value: u8 = 0; let ptr: *mut bool = &mut value as *mut u8 as *mut bool; let _bool = ptr.read(); // This is fine, `ptr` points to a valid `bool`. ptr.write_bytes(42u8, 1); // This function itself does not cause UB... let _bool = ptr.read(); // ...but it makes this operation UB! ⚠️ } ``` [valid]: crate::ptr#safety | |
| 3248 | core::ptr | write_unaligned | function | Behavior is undefined if any of the following conditions are violated: * `dst` must be [valid] for writes. [valid]: self#safety | |
| 3249 | core::ptr | write_volatile | function | Behavior is undefined if any of the following conditions are violated: * `dst` must be either [valid] for writes, or `T` must be a ZST, or `dst` must point to memory outside of all Rust allocations and writing to that memory must: - not trap, and - not cause any memory inside a Rust allocation to be modified. * `dst` must be properly aligned. Note that even if `T` has size `0`, the pointer must be properly aligned. [valid]: self#safety | |
| 3250 | core::ptr::alignment::Alignment | new_unchecked | function | `align` must be a power of two. Equivalently, it must be `1 << exp` for some `exp` in `0..usize::BITS`. It must *not* be zero. | |
| 3251 | core::ptr::alignment::Alignment | of_val_raw | function | This function is only safe to call if the following conditions hold: - If `T` is `Sized`, this function is always safe to call. - If the unsized tail of `T` is: - a [slice], then the length of the slice tail must be an initialized integer, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. For the special case where the dynamic tail length is 0, this function is safe to call. - a [trait object], then the vtable part of the pointer must point to a valid vtable acquired by an unsizing coercion, and the size of the *entire value* (dynamic tail length + statically sized prefix) must fit in `isize`. - an (unstable) [extern type], then this function is always safe to call, but may panic or otherwise return the wrong value, as the extern type's layout is not known. This is the same behavior as [`Alignment::of_val`] on a reference to a type with an extern type tail. - otherwise, it is conservatively not allowed to call this function. [trait object]: ../../book/ch17-02-trait-objects.html [extern type]: ../../unstable-book/language-features/extern-types.html | |
| 3252 | core::ptr::non_null::NonNull | add | function | If any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation | |
| 3253 | core::ptr::non_null::NonNull | as_mut | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3254 | core::ptr::non_null::NonNull | as_ref | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). | |
| 3255 | core::ptr::non_null::NonNull | as_uninit_mut | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory. | |
| 3256 | core::ptr::non_null::NonNull | as_uninit_ref | function | When calling this method, you have to ensure that the pointer is [convertible to a reference](crate::ptr#pointer-to-reference-conversion). Note that because the created reference is to `MaybeUninit<T>`, the source pointer can point to uninitialized memory. | |
| 3257 | core::ptr::non_null::NonNull | as_uninit_slice | function | When calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get mutated (except inside `UnsafeCell`). This applies even if the result of this method is unused! See also [`slice::from_raw_parts`]. [valid]: crate::ptr#safety | |
| 3258 | core::ptr::non_null::NonNull | as_uninit_slice_mut | function | When calling this method, you have to ensure that all of the following is true: * The pointer must be [valid] for reads and writes for `ptr.len() * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The pointer must be aligned even for zero-length slices. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * The total size `ptr.len() * size_of::<T>()` of the slice must be no larger than `isize::MAX`. See the safety documentation of [`pointer::offset`]. * You must enforce Rust's aliasing rules, since the returned lifetime `'a` is arbitrarily chosen and does not necessarily reflect the actual lifetime of the data. In particular, while this reference exists, the memory the pointer points to must not get accessed (read or written) through any other pointer. This applies even if the result of this method is unused! See also [`slice::from_raw_parts_mut`]. [valid]: crate::ptr#safety | |
| 3259 | core::ptr::non_null::NonNull | byte_add | function | ||
| 3260 | core::ptr::non_null::NonNull | byte_offset | function | ||
| 3261 | core::ptr::non_null::NonNull | byte_offset_from | function | ||
| 3262 | core::ptr::non_null::NonNull | byte_offset_from_unsigned | function | ||
| 3263 | core::ptr::non_null::NonNull | byte_sub | function | ||
| 3264 | core::ptr::non_null::NonNull | copy_from | function | ||
| 3265 | core::ptr::non_null::NonNull | copy_from_nonoverlapping | function | ||
| 3266 | core::ptr::non_null::NonNull | copy_to | function | ||
| 3267 | core::ptr::non_null::NonNull | copy_to_nonoverlapping | function | ||
| 3268 | core::ptr::non_null::NonNull | drop_in_place | function | ||
| 3269 | core::ptr::non_null::NonNull | get_unchecked_mut | function | ||
| 3270 | core::ptr::non_null::NonNull | new_unchecked | function | `ptr` must be non-null. | |
| 3271 | core::ptr::non_null::NonNull | offset | function | If any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation | |
| 3272 | core::ptr::non_null::NonNull | offset_from | function | If any of the following conditions are violated, the result is Undefined Behavior: * `self` and `origin` must either * point to the same address, or * both be *derived from* a pointer to the same [allocation], and the memory range between the two pointers must be in bounds of that object. (See below for an example.) * The distance between the pointers, in bytes, must be an exact multiple of the size of `T`. As a consequence, the absolute distance between the pointers, in bytes, computed on mathematical integers (without "wrapping around"), cannot overflow an `isize`. This is implied by the in-bounds requirement, and the fact that no allocation can be larger than `isize::MAX` bytes. The requirement for pointers to be derived from the same allocation is primarily needed for `const`-compatibility: the distance between pointers into *different* allocated objects is not known at compile-time. However, the requirement also exists at runtime and may be exploited by optimizations. If you wish to compute the difference between pointers that are not guaranteed to be from the same allocation, use `(self as isize - origin as isize) / size_of::<T>()`. [`add`]: #method.add [allocation]: crate::ptr#allocation | |
| 3273 | core::ptr::non_null::NonNull | offset_from_unsigned | function | - The distance between the pointers must be non-negative (`self >= origin`) - *All* the safety conditions of [`offset_from`](#method.offset_from) apply to this method as well; see it for the full details. Importantly, despite the return type of this method being able to represent a larger offset, it's still *not permitted* to pass pointers which differ by more than `isize::MAX` *bytes*. As such, the result of this method will always be less than or equal to `isize::MAX as usize`. | |
| 3274 | core::ptr::non_null::NonNull | read | function | ||
| 3275 | core::ptr::non_null::NonNull | read_unaligned | function | ||
| 3276 | core::ptr::non_null::NonNull | read_volatile | function | ||
| 3277 | core::ptr::non_null::NonNull | replace | function | ||
| 3278 | core::ptr::non_null::NonNull | sub | function | If any of the following conditions are violated, the result is Undefined Behavior: * The computed offset, `count * size_of::<T>()` bytes, must not overflow `isize`. * If the computed offset is non-zero, then `self` must be derived from a pointer to some [allocation], and the entire memory range between `self` and the result must be in bounds of that allocation. In particular, this range must not "wrap around" the edge of the address space. Allocations can never be larger than `isize::MAX` bytes, so if the computed offset stays in bounds of the allocation, it is guaranteed to satisfy the first requirement. This implies, for instance, that `vec.as_ptr().add(vec.len())` (for `vec: Vec<T>`) is always safe. [allocation]: crate::ptr#allocation | |
| 3279 | core::ptr::non_null::NonNull | swap | function | ||
| 3280 | core::ptr::non_null::NonNull | write | function | ||
| 3281 | core::ptr::non_null::NonNull | write_bytes | function | ||
| 3282 | core::ptr::non_null::NonNull | write_unaligned | function | ||
| 3283 | core::ptr::non_null::NonNull | write_volatile | function | ||
| 3284 | core::result::Result | unwrap_err_unchecked | function | Calling this method on an [`Ok`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3285 | core::result::Result | unwrap_unchecked | function | Calling this method on an [`Err`] is *[undefined behavior]*. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3286 | core::slice | GetDisjointMutIndex | trait | If `is_in_bounds()` returns `true` and `is_overlapping()` returns `false`, it must be safe to index the slice with the indices. | |
| 3287 | core::slice | align_to | function | This method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here. | |
| 3288 | core::slice | align_to_mut | function | This method is essentially a `transmute` with respect to the elements in the returned middle slice, so all the usual caveats pertaining to `transmute::<T, U>` also apply here. | |
| 3289 | core::slice | as_ascii_unchecked | function | Every byte in the slice must be in `0..=127`, or else this is UB. | |
| 3290 | core::slice | as_chunks_unchecked | function | This may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`. | |
| 3291 | core::slice | as_chunks_unchecked_mut | function | This may only be called when - The slice splits exactly into `N`-element chunks (aka `self.len() % N == 0`). - `N != 0`. | |
| 3292 | core::slice | assume_init_drop | function | It is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. Calling this when the content is not yet fully initialized causes undefined behavior. On top of that, all additional invariants of the type `T` must be satisfied, as the `Drop` implementation of `T` (or its members) may rely on this. For example, setting a `Vec<T>` to an invalid but non-null address makes it initialized (under the current implementation; this does not constitute a stable guarantee), because the only requirement the compiler knows about it is that the data pointer must be non-null. Dropping such a `Vec<T>` however will cause undefined behavior. | |
| 3293 | core::slice | assume_init_mut | function | Calling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. For instance, `.assume_init_mut()` cannot be used to initialize a `MaybeUninit` slice. | |
| 3294 | core::slice | assume_init_ref | function | Calling this when the content is not yet fully initialized causes undefined behavior: it is up to the caller to guarantee that every `MaybeUninit<T>` in the slice really is in an initialized state. | |
| 3295 | core::slice | get_disjoint_unchecked_mut | function | Calling this method with overlapping or out-of-bounds indices is *[undefined behavior]* even if the resulting references are not used. | |
| 3296 | core::slice | get_unchecked | function | Calling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get(index).unwrap_unchecked()`. It's UB to call `.get_unchecked(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked(..len + 1)`, `.get_unchecked(..=len)`, or similar. [`get`]: slice::get [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3297 | core::slice | get_unchecked_mut | function | Calling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. You can think of this like `.get_mut(index).unwrap_unchecked()`. It's UB to call `.get_unchecked_mut(len)`, even if you immediately convert to a pointer. And it's UB to call `.get_unchecked_mut(..len + 1)`, `.get_unchecked_mut(..=len)`, or similar. [`get_mut`]: slice::get_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3298 | core::slice | split_at_mut_unchecked | function | Calling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at_mut`]: slice::split_at_mut [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3299 | core::slice | split_at_unchecked | function | Calling this method with an out-of-bounds index is *[undefined behavior]* even if the resulting reference is not used. The caller has to ensure that `0 <= mid <= self.len()`. [`split_at`]: slice::split_at [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3300 | core::slice | swap_unchecked | function | Calling this method with an out-of-bounds index is *[undefined behavior]*. The caller has to ensure that `a < self.len()` and `b < self.len()`. | |
| 3301 | core::slice::index | SliceIndex | trait | ||
| 3302 | core::slice::raw | from_mut_ptr_range | function | Behavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_mut_ptr_range`] fulfills these requirements. | |
| 3303 | core::slice::raw | from_ptr_range | function | Behavior is undefined if any of the following conditions are violated: * The `start` pointer of the range must be a non-null, [valid] and properly aligned pointer to the first element of a slice. * The `end` pointer must be a [valid] and properly aligned pointer to *one past* the last element, such that the offset from the end to the start pointer is the length of the slice. * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * The range must contain `N` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total length of the range must be no larger than `isize::MAX`, and adding that size to `start` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. Note that a range created from [`slice::as_ptr_range`] fulfills these requirements. | |
| 3304 | core::slice::raw | from_raw_parts | function | Behavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for reads for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. See [below](#incorrect-usage) for an example incorrectly not taking this into account. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be mutated for the duration of lifetime `'a`, except inside an `UnsafeCell`. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. | |
| 3305 | core::slice::raw | from_raw_parts_mut | function | Behavior is undefined if any of the following conditions are violated: * `data` must be non-null, [valid] for both reads and writes for `len * size_of::<T>()` many bytes, and it must be properly aligned. This means in particular: * The entire memory range of this slice must be contained within a single allocation! Slices can never span across multiple allocations. * `data` must be non-null and aligned even for zero-length slices or slices of ZSTs. One reason for this is that enum layout optimizations may rely on references (including slices of any length) being aligned and non-null to distinguish them from other data. You can obtain a pointer that is usable as `data` for zero-length slices using [`NonNull::dangling()`]. * `data` must point to `len` consecutive properly initialized values of type `T`. * The memory referenced by the returned slice must not be accessed through any other pointer (not derived from the return value) for the duration of lifetime `'a`. Both read and write accesses are forbidden. * The total size `len * size_of::<T>()` of the slice must be no larger than `isize::MAX`, and adding that size to `data` must not "wrap around" the address space. See the safety documentation of [`pointer::offset`]. [valid]: ptr#safety [`NonNull::dangling()`]: ptr::NonNull::dangling | |
| 3306 | core::str | as_ascii_unchecked | function | Every character in this string must be ASCII, or else this is UB. | |
| 3307 | core::str | as_bytes_mut | function | The caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior. | |
| 3308 | core::str | from_utf8_unchecked | function | The bytes passed in must be valid UTF-8. | |
| 3309 | core::str | from_utf8_unchecked_mut | function | ||
| 3310 | core::str | get_unchecked | function | Callers of this function are responsible for ensuring that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type. | |
| 3311 | core::str | get_unchecked_mut | function | Callers of this function are responsible for ensuring that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type. | |
| 3312 | core::str | slice_mut_unchecked | function | Callers of this function are responsible for ensuring that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries. | |
| 3313 | core::str | slice_unchecked | function | Callers of this function are responsible for ensuring that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries. | |
| 3314 | core::str::converts | from_raw_parts | function | ||
| 3315 | core::str::converts | from_raw_parts_mut | function | ||
| 3316 | core::str::converts | from_utf8_unchecked | function | The bytes passed in must be valid UTF-8. | |
| 3317 | core::str::converts | from_utf8_unchecked_mut | function | ||
| 3318 | core::str::pattern | ReverseSearcher | trait | ||
| 3319 | core::str::pattern | Searcher | trait | ||
| 3320 | core::str::validations | next_code_point | function | `bytes` must produce a valid UTF-8-like (UTF-8 or WTF-8) string. | |
| 3321 | core::sync::atomic | AtomicPrimitive | trait | ||
| 3322 | core::sync::atomic::Atomic | from_ptr | function | The same three conditions apply for every atomic type (the generated docs repeat them verbatim once per type, differing only in the type name): * `ptr` must be aligned to `align_of::<AtomicT>()` for the corresponding atomic type `AtomicT` (for `AtomicBool`, `AtomicI8`, and `AtomicU8` this is always true, since their alignment is 1; for the wider integer atomics, `AtomicIsize`/`AtomicUsize`, and `AtomicPtr<T>`, note that on some platforms this can be bigger than the alignment of the underlying primitive type). * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`. * You must adhere to the [Memory model for atomic accesses]. In particular, it is not allowed to mix conflicting atomic and non-atomic accesses, or atomic accesses of different sizes, without synchronization. [valid]: crate::ptr#safety [Memory model for atomic accesses]: self#memory-model-for-atomic-accesses | |
| 3323 | core::task::wake::LocalWaker | from_raw | function | ||
| 3324 | core::task::wake::LocalWaker | new | function | The behavior of the returned `Waker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld. | |
| 3325 | core::task::wake::Waker | from_raw | function | The behavior of the returned `Waker` is undefined if the contract defined in [`RawWaker`]'s and [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html | |
| 3326 | core::task::wake::Waker | new | function | The behavior of the returned `Waker` is undefined if the contract defined in [`RawWakerVTable`]'s documentation is not upheld. (Authors wishing to avoid unsafe code may implement the [`Wake`] trait instead, at the cost of a required heap allocation.) [`Wake`]: ../../alloc/task/trait.Wake.html | |
| 3327 | core::u128 | unchecked_add | function | This results in undefined behavior when `self + rhs > u128::MAX` or `self + rhs < u128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u128::checked_add [`wrapping_add`]: u128::wrapping_add | |
| 3328 | core::u128 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3329 | core::u128 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3330 | core::u128 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u128::MAX` or `self * rhs < u128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u128::checked_mul [`wrapping_mul`]: u128::wrapping_mul | |
| 3331 | core::u128 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl | |
| 3332 | core::u128 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS` i.e. when [`u128::shl_exact`] would return `None`. | |
| 3333 | core::u128 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr | |
| 3334 | core::u128 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS` i.e. when [`u128::shr_exact`] would return `None`. | |
| 3335 | core::u128 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u128::MAX` or `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u128::checked_sub [`wrapping_sub`]: u128::wrapping_sub | |
| 3336 | core::u16 | unchecked_add | function | This results in undefined behavior when `self + rhs > u16::MAX` or `self + rhs < u16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u16::checked_add [`wrapping_add`]: u16::wrapping_add | |
| 3337 | core::u16 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3338 | core::u16 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3339 | core::u16 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u16::MAX` or `self * rhs < u16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u16::checked_mul [`wrapping_mul`]: u16::wrapping_mul | |
| 3340 | core::u16 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl | |
| 3341 | core::u16 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS` i.e. when [`u16::shl_exact`] would return `None`. | |
| 3342 | core::u16 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr | |
| 3343 | core::u16 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS` i.e. when [`u16::shr_exact`] would return `None`. | |
| 3344 | core::u16 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u16::MAX` or `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u16::checked_sub [`wrapping_sub`]: u16::wrapping_sub | |
| 3345 | core::u32 | unchecked_add | function | This results in undefined behavior when `self + rhs > u32::MAX` or `self + rhs < u32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u32::checked_add [`wrapping_add`]: u32::wrapping_add | |
| 3346 | core::u32 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3347 | core::u32 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3348 | core::u32 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u32::MAX` or `self * rhs < u32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u32::checked_mul [`wrapping_mul`]: u32::wrapping_mul | |
| 3349 | core::u32 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl | |
| 3350 | core::u32 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS` i.e. when [`u32::shl_exact`] would return `None`. | |
| 3351 | core::u32 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr | |
| 3352 | core::u32 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS` i.e. when [`u32::shr_exact`] would return `None`. | |
| 3353 | core::u32 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u32::MAX` or `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u32::checked_sub [`wrapping_sub`]: u32::wrapping_sub | |
| 3354 | core::u64 | unchecked_add | function | This results in undefined behavior when `self + rhs > u64::MAX` or `self + rhs < u64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u64::checked_add [`wrapping_add`]: u64::wrapping_add | |
| 3355 | core::u64 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3356 | core::u64 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3357 | core::u64 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u64::MAX` or `self * rhs < u64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u64::checked_mul [`wrapping_mul`]: u64::wrapping_mul | |
| 3358 | core::u64 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl | |
| 3359 | core::u64 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS` i.e. when [`u64::shl_exact`] would return `None`. | |
| 3360 | core::u64 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr | |
| 3361 | core::u64 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS` i.e. when [`u64::shr_exact`] would return `None`. | |
| 3362 | core::u64 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u64::MAX` or `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u64::checked_sub [`wrapping_sub`]: u64::wrapping_sub | |
| 3363 | core::u8 | as_ascii_unchecked | function | This byte must be valid ASCII, or else this is UB. | |
| 3364 | core::u8 | unchecked_add | function | This results in undefined behavior when `self + rhs > u8::MAX` or `self + rhs < u8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: u8::checked_add [`wrapping_add`]: u8::wrapping_add | |
| 3365 | core::u8 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3366 | core::u8 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3367 | core::u8 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u8::MAX` or `self * rhs < u8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: u8::checked_mul [`wrapping_mul`]: u8::wrapping_mul | |
| 3368 | core::u8 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl | |
| 3369 | core::u8 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS` i.e. when [`u8::shl_exact`] would return `None`. | |
| 3370 | core::u8 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr | |
| 3371 | core::u8 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS` i.e. when [`u8::shr_exact`] would return `None`. | |
| 3372 | core::u8 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u8::MAX` or `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: u8::checked_sub [`wrapping_sub`]: u8::wrapping_sub | |
| 3373 | core::usize | unchecked_add | function | This results in undefined behavior when `self + rhs > usize::MAX` or `self + rhs < usize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: usize::checked_add [`wrapping_add`]: usize::wrapping_add | |
| 3374 | core::usize | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3375 | core::usize | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3376 | core::usize | unchecked_mul | function | This results in undefined behavior when `self * rhs > usize::MAX` or `self * rhs < usize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: usize::checked_mul [`wrapping_mul`]: usize::wrapping_mul | |
| 3377 | core::usize | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl | |
| 3378 | core::usize | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS` i.e. when [`usize::shl_exact`] would return `None`. | |
| 3379 | core::usize | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr | |
| 3380 | core::usize | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS` i.e. when [`usize::shr_exact`] would return `None`. | |
| 3381 | core::usize | unchecked_sub | function | This results in undefined behavior when `self - rhs > usize::MAX` or `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: usize::checked_sub [`wrapping_sub`]: usize::wrapping_sub | |
| 3382 | std::char | as_ascii_unchecked | function | This char must be within the ASCII range, or else this is UB. | |
| 3383 | std::char | from_u32_unchecked | function | This function is unsafe, as it may construct invalid `char` values. For a safe version of this function, see the [`from_u32`] function. [`from_u32`]: #method.from_u32 | |
| 3384 | std::collections::hash::map::HashMap | get_disjoint_unchecked_mut | function | Calling this method with overlapping keys is *[undefined behavior]* even if the resulting references are not used. [undefined behavior]: https://doc.rust-lang.org/reference/behavior-considered-undefined.html | |
| 3385 | std::env | remove_var | function | This function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To prevent a child process from inheriting an environment variable, you can instead use [`Command::env_remove`] or [`Command::env_clear`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env_remove`]: crate::process::Command::env_remove [`Command::env_clear`]: crate::process::Command::env_clear | |
| 3386 | std::env | set_var | function | This function is safe to call in a single-threaded program. This function is also always safe to call on Windows, in single-threaded and multi-threaded programs. In multi-threaded programs on other operating systems, the only safe option is to not use `set_var` or `remove_var` at all. The exact requirement is: you must ensure that there are no other threads concurrently writing or *reading*(!) the environment through functions or global variables other than the ones in this module. The problem is that these operating systems do not provide a thread-safe way to read the environment, and most C libraries, including libc itself, do not advertise which functions read from the environment. Even functions from the Rust standard library may read the environment without going through this module, e.g. for DNS lookups from [`std::net::ToSocketAddrs`]. No stable guarantee is made about which functions may read from the environment in future versions of a library. All this makes it not practically possible for you to guarantee that no other thread will read the environment, so the only safe option is to not use `set_var` or `remove_var` in multi-threaded programs at all. Discussion of this unsafety on Unix may be found in: - [Austin Group Bugzilla (for POSIX)](https://austingroupbugs.net/view.php?id=188) - [GNU C library Bugzilla](https://sourceware.org/bugzilla/show_bug.cgi?id=15607#c2) To pass an environment variable to a child process, you can instead use [`Command::env`]. [`std::net::ToSocketAddrs`]: crate::net::ToSocketAddrs [`Command::env`]: crate::process::Command::env | |
| 3387 | std::f128 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 3388 | std::f16 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 3389 | std::f32 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 3390 | std::f64 | to_int_unchecked | function | The value must: * Not be `NaN` * Not be infinite * Be representable in the return type `Int`, after truncating off its fractional part | |
| 3391 | std::ffi::os_str::OsStr | from_encoded_bytes_unchecked | function | As the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsStr` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring. | |
| 3392 | std::ffi::os_str::OsString | from_encoded_bytes_unchecked | function | As the encoding is unspecified, callers must pass in bytes that originated as a mixture of validated UTF-8 and bytes from [`OsStr::as_encoded_bytes`] from within the same Rust version built for the same target platform. For example, reconstructing an `OsString` from bytes sent over the network or stored in a file will likely violate these safety rules. Due to the encoding being self-synchronizing, the bytes from [`OsStr::as_encoded_bytes`] can be split either immediately before or immediately after any valid non-empty UTF-8 substring. | |
| 3393 | std::i128 | unchecked_add | function | This results in undefined behavior when `self + rhs > i128::MAX` or `self + rhs < i128::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i128::checked_add [`wrapping_add`]: i128::wrapping_add | |
| 3394 | std::i128 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i128::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3395 | std::i128 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i128::MAX` or `self * rhs < i128::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i128::checked_mul [`wrapping_mul`]: i128::wrapping_mul | |
| 3396 | std::i128 | unchecked_neg | function | This results in undefined behavior when `self == i128::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i128::checked_neg | |
| 3397 | std::i128 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i128::checked_shl | |
| 3398 | std::i128 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i128::shl_exact`] would return `None`. | |
| 3399 | std::i128 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i128::checked_shr | |
| 3400 | std::i128 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i128::BITS` i.e. when [`i128::shr_exact`] would return `None`. | |
| 3401 | std::i128 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i128::MAX` or `self - rhs < i128::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i128::checked_sub [`wrapping_sub`]: i128::wrapping_sub | |
| 3402 | std::i16 | unchecked_add | function | This results in undefined behavior when `self + rhs > i16::MAX` or `self + rhs < i16::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i16::checked_add [`wrapping_add`]: i16::wrapping_add | |
| 3403 | std::i16 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i16::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3404 | std::i16 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i16::MAX` or `self * rhs < i16::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i16::checked_mul [`wrapping_mul`]: i16::wrapping_mul | |
| 3405 | std::i16 | unchecked_neg | function | This results in undefined behavior when `self == i16::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i16::checked_neg | |
| 3406 | std::i16 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i16::checked_shl | |
| 3407 | std::i16 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i16::shl_exact`] would return `None`. | |
| 3408 | std::i16 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i16::checked_shr | |
| 3409 | std::i16 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i16::BITS` i.e. when [`i16::shr_exact`] would return `None`. | |
| 3410 | std::i16 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i16::MAX` or `self - rhs < i16::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i16::checked_sub [`wrapping_sub`]: i16::wrapping_sub | |
| 3411 | std::i32 | unchecked_add | function | This results in undefined behavior when `self + rhs > i32::MAX` or `self + rhs < i32::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i32::checked_add [`wrapping_add`]: i32::wrapping_add | |
| 3412 | std::i32 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i32::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3413 | std::i32 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i32::MAX` or `self * rhs < i32::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i32::checked_mul [`wrapping_mul`]: i32::wrapping_mul | |
| 3414 | std::i32 | unchecked_neg | function | This results in undefined behavior when `self == i32::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i32::checked_neg | |
| 3415 | std::i32 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i32::checked_shl | |
| 3416 | std::i32 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i32::shl_exact`] would return `None`. | |
| 3417 | std::i32 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i32::checked_shr | |
| 3418 | std::i32 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i32::BITS` i.e. when [`i32::shr_exact`] would return `None`. | |
| 3419 | std::i32 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i32::MAX` or `self - rhs < i32::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i32::checked_sub [`wrapping_sub`]: i32::wrapping_sub | |
| 3420 | std::i64 | unchecked_add | function | This results in undefined behavior when `self + rhs > i64::MAX` or `self + rhs < i64::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i64::checked_add [`wrapping_add`]: i64::wrapping_add | |
| 3421 | std::i64 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i64::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3422 | std::i64 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i64::MAX` or `self * rhs < i64::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i64::checked_mul [`wrapping_mul`]: i64::wrapping_mul | |
| 3423 | std::i64 | unchecked_neg | function | This results in undefined behavior when `self == i64::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i64::checked_neg | |
| 3424 | std::i64 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i64::checked_shl | |
| 3425 | std::i64 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i64::shl_exact`] would return `None`. | |
| 3426 | std::i64 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i64::checked_shr | |
| 3427 | std::i64 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i64::BITS` i.e. when [`i64::shr_exact`] would return `None`. | |
| 3428 | std::i64 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i64::MAX` or `self - rhs < i64::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i64::checked_sub [`wrapping_sub`]: i64::wrapping_sub | |
| 3429 | std::i8 | unchecked_add | function | This results in undefined behavior when `self + rhs > i8::MAX` or `self + rhs < i8::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: i8::checked_add [`wrapping_add`]: i8::wrapping_add | |
| 3430 | std::i8 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == i8::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3431 | std::i8 | unchecked_mul | function | This results in undefined behavior when `self * rhs > i8::MAX` or `self * rhs < i8::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: i8::checked_mul [`wrapping_mul`]: i8::wrapping_mul | |
| 3432 | std::i8 | unchecked_neg | function | This results in undefined behavior when `self == i8::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: i8::checked_neg | |
| 3433 | std::i8 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: i8::checked_shl | |
| 3434 | std::i8 | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`i8::shl_exact`] would return `None`. | |
| 3435 | std::i8 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: i8::checked_shr | |
| 3436 | std::i8 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= i8::BITS` i.e. when [`i8::shr_exact`] would return `None`. | |
| 3437 | std::i8 | unchecked_sub | function | This results in undefined behavior when `self - rhs > i8::MAX` or `self - rhs < i8::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: i8::checked_sub [`wrapping_sub`]: i8::wrapping_sub | |
| 3438 | std::isize | unchecked_add | function | This results in undefined behavior when `self + rhs > isize::MAX` or `self + rhs < isize::MIN`, i.e. when [`checked_add`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_add`]: isize::checked_add [`wrapping_add`]: isize::wrapping_add | |
| 3439 | std::isize | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0`, `self % rhs != 0`, or `self == isize::MIN && rhs == -1`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3440 | std::isize | unchecked_mul | function | This results in undefined behavior when `self * rhs > isize::MAX` or `self * rhs < isize::MIN`, i.e. when [`checked_mul`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_mul`]: isize::checked_mul [`wrapping_mul`]: isize::wrapping_mul | |
| 3441 | std::isize | unchecked_neg | function | This results in undefined behavior when `self == isize::MIN`, i.e. when [`checked_neg`] would return `None`. [`checked_neg`]: isize::checked_neg | |
| 3442 | std::isize | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: isize::checked_shl | |
| 3443 | std::isize | unchecked_shl_exact | function | This results in undefined behavior when `rhs >= self.leading_zeros() && rhs >= self.leading_ones()` i.e. when [`isize::shl_exact`] would return `None`. | |
| 3444 | std::isize | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: isize::checked_shr | |
| 3445 | std::isize | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= isize::BITS` i.e. when [`isize::shr_exact`] would return `None`. | |
| 3446 | std::isize | unchecked_sub | function | This results in undefined behavior when `self - rhs > isize::MAX` or `self - rhs < isize::MIN`, i.e. when [`checked_sub`] would return `None`. [`unwrap_unchecked`]: option/enum.Option.html#method.unwrap_unchecked [`checked_sub`]: isize::checked_sub [`wrapping_sub`]: isize::wrapping_sub | |
| 3447 | std::os::fd::owned::BorrowedFd | borrow_raw | function | The resource pointed to by `fd` must remain open for the duration of the returned `BorrowedFd`. | |
| 3448 | std::os::windows::io::handle::BorrowedHandle | borrow_raw | function | The resource pointed to by `handle` must be a valid open handle, and it must remain open for the duration of the returned `BorrowedHandle`. Note that it *may* have the value `INVALID_HANDLE_VALUE` (-1), which is sometimes a valid handle value. See [here] for the full story. And, it *may* have the value `NULL` (0), which can occur when consoles are detached from processes, or when `windows_subsystem` is used. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443 | |
| 3449 | std::os::windows::io::handle::HandleOrInvalid | from_raw_handle | function | The passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be `INVALID_HANDLE_VALUE` (-1). Note that not all Windows APIs use `INVALID_HANDLE_VALUE` for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443 | |
| 3450 | std::os::windows::io::handle::HandleOrNull | from_raw_handle | function | The passed `handle` value must either satisfy the safety requirements of [`FromRawHandle::from_raw_handle`], or be null. Note that not all Windows APIs use null for errors; see [here] for the full story. [here]: https://devblogs.microsoft.com/oldnewthing/20040302-00/?p=40443 | |
| 3451 | std::os::windows::io::socket::BorrowedSocket | borrow_raw | function | The resource pointed to by `socket` must remain open for the duration of the returned `BorrowedSocket`, and it must not have the value `INVALID_SOCKET`. | |
| 3452 | std::os::windows::process::ProcThreadAttributeListBuilder | raw_attribute | function | This function is marked as `unsafe` because it deals with raw pointers and sizes. It is the responsibility of the caller to ensure the value lives longer than the resulting [`ProcThreadAttributeList`] as well as the validity of the size parameter. | |
| 3453 | std::str | as_ascii_unchecked | function | Every character in this string must be ASCII, or else this is UB. | |
| 3454 | std::str | as_bytes_mut | function | The caller must ensure that the content of the slice is valid UTF-8 before the borrow ends and the underlying `str` is used. Use of a `str` whose contents are not valid UTF-8 is undefined behavior. | |
| 3455 | std::str | from_utf8_unchecked | function | The bytes passed in must be valid UTF-8. | |
| 3456 | std::str | from_utf8_unchecked_mut | function | The bytes passed in must be valid UTF-8. | |
| 3457 | std::str | get_unchecked | function | Callers of this function are responsible that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type. | |
| 3458 | std::str | get_unchecked_mut | function | Callers of this function are responsible that these preconditions are satisfied: * The starting index must not exceed the ending index; * Indexes must be within bounds of the original slice; * Indexes must lie on UTF-8 sequence boundaries. Failing that, the returned string slice may reference invalid memory or violate the invariants communicated by the `str` type. | |
| 3459 | std::str | slice_mut_unchecked | function | Callers of this function are responsible that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries. | |
| 3460 | std::str | slice_unchecked | function | Callers of this function are responsible that three preconditions are satisfied: * `begin` must not exceed `end`. * `begin` and `end` must be byte positions within the string slice. * `begin` and `end` must lie on UTF-8 sequence boundaries. | |
| 3461 | std::thread::builder::Builder | spawn_unchecked | function | The caller has to ensure that the spawned thread does not outlive any references in the supplied thread closure and its return type. This can be guaranteed in two ways: - ensure that [`join`][`JoinHandle::join`] is called before any referenced data is dropped - use only types with `'static` lifetime bounds, i.e., those with no or only `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`] and [`thread::spawn`] enforce this property statically) | |
| 3462 | std::thread::thread::Thread | from_raw | function | This function is unsafe because improper use may lead to memory unsafety, even if the returned `Thread` is never accessed. Creating a `Thread` from a pointer other than one returned from [`Thread::into_raw`] is **undefined behavior**. Calling this function twice on the same raw pointer can lead to a double-free if both `Thread` instances are dropped. | |
| 3463 | std::u128 | unchecked_add | function | This results in undefined behavior when `self + rhs > u128::MAX` or `self + rhs < u128::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u128::checked_add | |
| 3464 | std::u128 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3465 | std::u128 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3466 | std::u128 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u128::MAX` or `self * rhs < u128::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u128::checked_mul | |
| 3467 | std::u128 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u128::checked_shl | |
| 3468 | std::u128 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u128::BITS` i.e. when [`u128::shl_exact`] would return `None`. | |
| 3469 | std::u128 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u128::checked_shr | |
| 3470 | std::u128 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u128::BITS` i.e. when [`u128::shr_exact`] would return `None`. | |
| 3471 | std::u128 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u128::MAX` or `self - rhs < u128::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u128::checked_sub | |
| 3472 | std::u16 | unchecked_add | function | This results in undefined behavior when `self + rhs > u16::MAX` or `self + rhs < u16::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u16::checked_add | |
| 3473 | std::u16 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3474 | std::u16 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3475 | std::u16 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u16::MAX` or `self * rhs < u16::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u16::checked_mul | |
| 3476 | std::u16 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u16::checked_shl | |
| 3477 | std::u16 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u16::BITS` i.e. when [`u16::shl_exact`] would return `None`. | |
| 3478 | std::u16 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u16::checked_shr | |
| 3479 | std::u16 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u16::BITS` i.e. when [`u16::shr_exact`] would return `None`. | |
| 3480 | std::u16 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u16::MAX` or `self - rhs < u16::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u16::checked_sub | |
| 3481 | std::u32 | unchecked_add | function | This results in undefined behavior when `self + rhs > u32::MAX` or `self + rhs < u32::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u32::checked_add | |
| 3482 | std::u32 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3483 | std::u32 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3484 | std::u32 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u32::MAX` or `self * rhs < u32::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u32::checked_mul | |
| 3485 | std::u32 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u32::checked_shl | |
| 3486 | std::u32 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u32::BITS` i.e. when [`u32::shl_exact`] would return `None`. | |
| 3487 | std::u32 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u32::checked_shr | |
| 3488 | std::u32 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u32::BITS` i.e. when [`u32::shr_exact`] would return `None`. | |
| 3489 | std::u32 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u32::MAX` or `self - rhs < u32::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u32::checked_sub | |
| 3490 | std::u64 | unchecked_add | function | This results in undefined behavior when `self + rhs > u64::MAX` or `self + rhs < u64::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u64::checked_add | |
| 3491 | std::u64 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3492 | std::u64 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3493 | std::u64 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u64::MAX` or `self * rhs < u64::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u64::checked_mul | |
| 3494 | std::u64 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u64::checked_shl | |
| 3495 | std::u64 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u64::BITS` i.e. when [`u64::shl_exact`] would return `None`. | |
| 3496 | std::u64 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u64::checked_shr | |
| 3497 | std::u64 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u64::BITS` i.e. when [`u64::shr_exact`] would return `None`. | |
| 3498 | std::u64 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u64::MAX` or `self - rhs < u64::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u64::checked_sub | |
| 3499 | std::u8 | as_ascii_unchecked | function | This byte must be valid ASCII, or else this is UB. | |
| 3500 | std::u8 | unchecked_add | function | This results in undefined behavior when `self + rhs > u8::MAX` or `self + rhs < u8::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: u8::checked_add | |
| 3501 | std::u8 | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3502 | std::u8 | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3503 | std::u8 | unchecked_mul | function | This results in undefined behavior when `self * rhs > u8::MAX` or `self * rhs < u8::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: u8::checked_mul | |
| 3504 | std::u8 | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: u8::checked_shl | |
| 3505 | std::u8 | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= u8::BITS` i.e. when [`u8::shl_exact`] would return `None`. | |
| 3506 | std::u8 | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: u8::checked_shr | |
| 3507 | std::u8 | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= u8::BITS` i.e. when [`u8::shr_exact`] would return `None`. | |
| 3508 | std::u8 | unchecked_sub | function | This results in undefined behavior when `self - rhs > u8::MAX` or `self - rhs < u8::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: u8::checked_sub | |
| 3509 | std::usize | unchecked_add | function | This results in undefined behavior when `self + rhs > usize::MAX` or `self + rhs < usize::MIN`, i.e. when [`checked_add`] would return `None`. [`checked_add`]: usize::checked_add | |
| 3510 | std::usize | unchecked_disjoint_bitor | function | Requires that `(self & other) == 0`, otherwise it's immediate UB. Equivalently, requires that `(self | other) == (self + other)`. | |
| 3511 | std::usize | unchecked_div_exact | function | This results in undefined behavior when `rhs == 0` or `self % rhs != 0`, i.e. when [`checked_div_exact`](Self::checked_div_exact) would return `None`. | |
| 3512 | std::usize | unchecked_mul | function | This results in undefined behavior when `self * rhs > usize::MAX` or `self * rhs < usize::MIN`, i.e. when [`checked_mul`] would return `None`. [`checked_mul`]: usize::checked_mul | |
| 3513 | std::usize | unchecked_shl | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shl`] would return `None`. [`checked_shl`]: usize::checked_shl | |
| 3514 | std::usize | unchecked_shl_exact | function | This results in undefined behavior when `rhs > self.leading_zeros() || rhs >= usize::BITS` i.e. when [`usize::shl_exact`] would return `None`. | |
| 3515 | std::usize | unchecked_shr | function | This results in undefined behavior if `rhs` is larger than or equal to the number of bits in `self`, i.e. when [`checked_shr`] would return `None`. [`checked_shr`]: usize::checked_shr | |
| 3516 | std::usize | unchecked_shr_exact | function | This results in undefined behavior when `rhs > self.trailing_zeros() || rhs >= usize::BITS` i.e. when [`usize::shr_exact`] would return `None`. | |
| 3517 | std::usize | unchecked_sub | function | This results in undefined behavior when `self - rhs > usize::MAX` or `self - rhs < usize::MIN`, i.e. when [`checked_sub`] would return `None`. [`checked_sub`]: usize::checked_sub |
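The `unchecked_*` rows above share one discharge pattern: establish the documented precondition with a cheap runtime check, then record that reasoning in a `// SAFETY:` comment on the `unsafe` block. A minimal sketch (assuming Rust 1.79+, where the integer `unchecked_add` methods are stable; the string and values here are illustrative):

```rust
fn main() {
    // str::get_unchecked (rows 3457/3458): indices must be in bounds and
    // lie on UTF-8 sequence boundaries. `is_char_boundary` proves both
    // for this string, since it also returns false past the end.
    let s = "héllo";
    let range = 0..3; // 'h' (1 byte) + 'é' (2 bytes)
    assert!(s.is_char_boundary(range.start) && s.is_char_boundary(range.end));
    // SAFETY: both indices were verified to be char boundaries within `s`.
    let prefix = unsafe { s.get_unchecked(range) };
    assert_eq!(prefix, "hé");

    // u32::unchecked_add (row 3481): UB exactly when `checked_add`
    // would return `None`, so a successful `checked_add` discharges it.
    let (a, b) = (40_u32, 2);
    if a.checked_add(b).is_some() {
        // SAFETY: checked_add returned Some, so `a + b` does not overflow.
        let sum = unsafe { a.unchecked_add(b) };
        assert_eq!(sum, 42);
    }
}
```

In optimized builds the compiler can often fold the guard and the unchecked call together; when it cannot, the checked variants (`checked_add`, `str::get`) are the safe default.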