Hugging Face compute_metrics


The `compute_metrics` argument of the Hugging Face `Trainer` is used for computing model metrics at evaluation time. In the `Trainer` signature it is documented as follows:

- compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*): The function that will be used to compute metrics at evaluation. Must take an [`EvalPrediction`] and return a dictionary mapping metric names (strings) to metric values.
- callbacks (List of [`TrainerCallback`], *optional*): A list of callbacks to customize the training loop.

Two related training options are worth knowing about: auto_find_batch_size (`bool`, *optional*, defaults to `False`) automatically searches for a batch size that fits in memory, and the `include_inputs_for_metrics` training argument controls whether the model inputs are passed to the `compute_metrics` function, which is intended for metrics that need inputs, predictions, and references for scoring.

The first step is to open a Google Colab (ideally with a GPU runtime), connect your Google Drive, and install the transformers package from Hugging Face; if you plan to push checkpoints to the Hub, log in as well:

```
pip install transformers
```

```python
from huggingface_hub import notebook_login

notebook_login()
```

Below, you can see how to use an accuracy metric inside a compute_metrics function that will be used by the Trainer:

```python
import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")

def compute_metrics(p):
    return metric.compute(predictions=np.argmax(p.predictions, axis=1), references=p.label_ids)
```
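With large vocabularies, keeping every logit around until compute_metrics runs can be expensive. The language-modeling example scripts in the transformers repository therefore pair compute_metrics with a preprocess_logits_for_metrics hook that shrinks the logits to predicted token ids first. A rough sketch, assuming a Trainer version that accepts the preprocess_logits_for_metrics argument; the label shifting is the causal-LM convention and may not apply to your task:

```python
import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")

def preprocess_logits_for_metrics(logits, labels):
    # Called on each batch inside the evaluation loop (torch tensors here).
    if isinstance(logits, tuple):
        logits = logits[0]
    return logits.argmax(dim=-1)           # keep only the predicted token ids

def compute_metrics(eval_preds):
    preds, labels = eval_preds              # numpy arrays of token ids
    labels = labels[:, 1:].reshape(-1)      # causal LM: predictions are shifted by one
    preds = preds[:, :-1].reshape(-1)
    return metric.compute(predictions=preds, references=labels)

# trainer = Trainer(..., compute_metrics=compute_metrics,
#                   preprocess_logits_for_metrics=preprocess_logits_for_metrics)
```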
You can also define your own custom compute_metrics function. It takes an `EvalPrediction` object (a namedtuple with `predictions` and `label_ids` fields) and has to return a dictionary mapping strings to floats; a common style is to unpack the tuple directly:

```python
import numpy as np
from datasets import load_metric

metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)
```

The same hook is passed to a Seq2SeqTrainer in exactly the same way:

```python
trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```

Let's see which transformer models support translation tasks. Beyond the simple pipeline, which only supports English-German, English-French, and English-Romanian translations, we can create a language translation pipeline for any pretrained Seq2Seq model within Hugging Face.
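As a hedged illustration of that last point, any Seq2Seq checkpoint from the Hub can back a translation pipeline; the Helsinki-NLP/opus-mt-en-de model id below is an assumption, not one named above:

```python
from transformers import pipeline

# Model id chosen for illustration; swap in any translation-capable Seq2Seq checkpoint.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
print(translator("Hugging Face makes evaluation easy."))
# returns a list like [{'translation_text': ...}]
```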
Token classification is another task where a good metric function matters. Let's see how we can build a useful compute_metrics() function for it and use it the next time we train. We already saw these labels when digging into the token-classification pipeline in Chapter 6, but for a quick refresher:

- O means the word doesn't correspond to any entity.
- B-PER/I-PER means the word corresponds to the beginning of/is inside a person entity.
- B-ORG/I-ORG means the word corresponds to the beginning of/is inside an organization entity.
- B-LOC/I-LOC means the word corresponds to the beginning of/is inside a location entity.
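For this label scheme, a common choice is the seqeval metric. A minimal sketch — the label_names list is an assumption and in practice should come from your dataset's features:

```python
import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")
# Assumed label set; take it from the dataset features in real code.
label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop special tokens (label -100) and map ids back to label strings.
    true_labels = [[label_names[l] for l in label if l != -100] for label in labels]
    true_predictions = [
        [label_names[p] for p, l in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    all_metrics = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": all_metrics["overall_precision"],
        "recall": all_metrics["overall_recall"],
        "f1": all_metrics["overall_f1"],
        "accuracy": all_metrics["overall_accuracy"],
    }
```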
Trainer itself is a simple but feature-complete training and eval loop for PyTorch, optimized for Transformers. The Hugging Face Trainer API is very intuitive and provides a generic train loop, something we don't get out of the box in PyTorch. Its important attributes:

- model — always points to the core model. If using a transformers model, it will be a PreTrainedModel subclass.
- model_wrapped — always points to the most external model, in case one or more other modules wrap the original model.

If none of the ready-made metrics fit, you can add your own. Start by adding some information about your metric in Metric._info(). The most important attributes you should specify are:

- MetricInfo.description provides a brief description of your metric.
- MetricInfo.citation contains a BibTeX citation for the metric.
- MetricInfo.inputs_description describes the expected inputs and outputs. It may also provide an example usage of the metric.

Two loading parameters matter for evaluation runs: cache_dir (optional str) is the path used to store temporary predictions and references (defaults to ~/.cache/huggingface/metrics/), and experiment_id (str) is a specific experiment id, used if several distributed evaluations share the same file system.
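A minimal sketch of such a custom metric, assuming the older datasets.Metric API that the attribute names above come from (newer code typically uses the evaluate library); the metric logic itself is a toy example:

```python
import datasets

class ExactMatch(datasets.Metric):
    """Toy metric: fraction of predictions that exactly equal their reference."""

    def _info(self):
        return datasets.MetricInfo(
            description="Fraction of predictions equal to their reference.",
            citation="",  # a BibTeX entry would go here
            inputs_description="predictions: list of str, references: list of str",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("string"),
                    "references": datasets.Value("string"),
                }
            ),
        )

    def _compute(self, predictions, references):
        matches = sum(p == r for p, r in zip(predictions, references))
        return {"exact_match": matches / len(references)}

# metric = ExactMatch(experiment_id="run-1")  # experiment_id/cache_dir as described above
```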
There are significant benefits to using a pretrained model: it reduces computation costs and your carbon footprint, and it allows you to use state-of-the-art models without having to train one from scratch. Transformers provides access to thousands of pretrained models for a wide range of tasks. Fine-tuning is the process of taking a pretrained large language model (e.g. roBERTa in this case) and then tweaking it on your own data. We need to load a pretrained checkpoint, configure it correctly for training, and define the training configuration.

With the model, training arguments, datasets, and compute_metrics in place, the Trainer is assembled like this (note the comma between every keyword argument):

```python
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Some models return their predictions as a tuple (for example when they also output hidden states); in that case compute_metrics should pick out the first element before scoring, completed here with a simple accuracy computation:

```python
def compute_metrics(p: EvalPrediction):
    preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
    preds = np.argmax(preds, axis=-1)
    return {"accuracy": (preds == p.label_ids).astype(np.float32).mean().item()}
```
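The Trainer call above takes a training_args object without showing how it was built. A minimal sketch of defining that training configuration — every value here is an illustrative assumption, not taken from the text:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-finetuned-model",    # hypothetical output directory
    evaluation_strategy="epoch",        # run compute_metrics at the end of each epoch
    learning_rate=2e-5,
    lr_scheduler_type="linear",         # the lr scheduler is configured here as well
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    weight_decay=0.01,
    logging_steps=500,                  # log the training loss every 500 batches
)
```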
Stepping back: before we learn how a Hugging Face model can be used to implement NLP solutions, it helps to know which basic NLP tasks Hugging Face supports and why we care about them. Hugging Face models provide many different configurations and great support for a variety of use cases, and Transformers distinguishes between "slow" tokenizers, implemented in Python, and "fast" tokenizers backed by the Rust Tokenizers library. Sentiment analysis is a good example task: among the popular sentiment analysis models available on the Hub that we recommend checking out, Twitter-roberta-base-sentiment is a roBERTa model trained on ~58M tweets and fine-tuned for sentiment analysis, and it can be used directly through pipeline().
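A quick sketch of trying that model through pipeline(); the full Hub id cardiffnlp/twitter-roberta-base-sentiment is assumed from the short name mentioned above:

```python
from transformers import pipeline

# Full Hub id is an assumption based on the model name above.
sentiment = pipeline("sentiment-analysis", model="cardiffnlp/twitter-roberta-base-sentiment")
print(sentiment("compute_metrics made evaluating this model painless!"))
# returns a list like [{'label': ..., 'score': ...}]
```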
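Finally, the code snippet frequently used to train an EncoderDecoderModel from Hugging Face's transformers library on a pre-coded dataset starts by instantiating the model and tokenizer. A minimal sketch — the variable name multibert follows the snippet referenced above, while the bert-base-multilingual-cased checkpoint and the token-id configuration are assumptions for illustration:

```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Checkpoint choice is an assumption; any encoder/decoder pair from the Hub would do.
checkpoint = "bert-base-multilingual-cased"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # returns a fast (Rust-backed) tokenizer
multibert = EncoderDecoderModel.from_encoder_decoder_pretrained(checkpoint, checkpoint)

# The decoder needs to know which ids start and pad generated sequences.
multibert.config.decoder_start_token_id = tokenizer.cls_token_id
multibert.config.pad_token_id = tokenizer.pad_token_id
```

Such a model can then be handed to the Seq2SeqTrainer shown earlier, with the same compute_metrics hook attached.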
